tools/mpy_ld.py: Allow linking static libraries.

This commit introduces an additional symbol resolution mechanism to the
natmod linking process.  This allows the build scripts to look for required
symbols in selected libraries provided by the compiler installation (libgcc
and libm at the moment).

For example, using soft-float code in natmods, whilst technically possible,
was not an easy process and required several manual steps to pull off.
With this addition all those manual (and error-prone) operations have been
automated and folded into `tools/mpy_ld.py`.

Both newlib and picolibc toolchains are supported, albeit the latter may
require a bit of extra configuration depending on the environment the build
process runs on.  Picolibc's soft-float functions aren't in libm - in fact
the shipped libm is nothing but a stub - but they are inside libc.  This is
usually not a problem as these changes cater for that quirk, but certain
compilers' library search paths may not take Picolibc's library directory
into account.  The bare metal RISC-V compiler shipped with the CI OS image
(GCC 10.2.0 on Ubuntu 22.04 LTS) happens to exhibit this very problem.

To work around that for CI builds, the Picolibc libraries' path is
hardcoded in the Makefile directives used by the linker, but this can be
overridden by setting the PICOLIBC_ROOT environment variable when building
natmods.

Signed-off-by: Volodymyr Shymanskyy <vshymanskyi@gmail.com>
Co-authored-by: Alessandro Gatti <a.gatti@frob.it>
This commit is contained in:
Volodymyr Shymanskyy
2024-09-12 14:39:59 +03:00
committed by Damien George
parent f187c77da8
commit 51976110e2
7 changed files with 438 additions and 80 deletions


@@ -70,6 +70,13 @@ The known limitations are:
So, if your C code has writable data, make sure the data is defined globally,
without an initialiser, and only written to within functions.
The native module is not automatically linked against the standard static libraries
like ``libm.a`` and ``libgcc.a``, which can lead to ``undefined symbol`` errors.
You can link the runtime libraries by setting ``LINK_RUNTIME = 1``
in your Makefile. Custom static libraries can also be linked by adding
``MPY_LD_FLAGS += -l path/to/library.a``. Note that these are linked into
the native module and will not be shared with other modules or the system.
Linker limitation: the native module is not linked against the symbol table of the
full MicroPython firmware. Rather, it is linked against an explicit table of exported
symbols found in ``mp_fun_table`` (in ``py/nativeglue.h``), that is fixed at firmware


@@ -10,5 +10,8 @@ SRC = main.c prod.c test.py
# Architecture to build for (x86, x64, armv7m, xtensa, xtensawin)
ARCH = x64
# Link with libm.a and libgcc.a from the toolchain
LINK_RUNTIME = 1
# Include to get the rules for compiling and linking the module
include $(MPY_DIR)/py/dynruntime.mk


@@ -1,5 +1,6 @@
/* This example demonstrates the following features in a native module:
- using floats
- calling math functions from libm.a
- defining additional code in Python (see test.py)
- have extra C code in a separate file (see prod.c)
*/
@@ -10,6 +11,9 @@
// Include the header for auxiliary C code for this module
#include "prod.h"
// Include standard library header
#include <math.h>
// Automatically detect if this module should include double-precision code.
// If double precision is supported by the target architecture then it can
// be used in native module regardless of what float setting the target
@@ -41,6 +45,12 @@ static mp_obj_t add_d(mp_obj_t x, mp_obj_t y) {
static MP_DEFINE_CONST_FUN_OBJ_2(add_d_obj, add_d);
#endif
// A function that uses libm
static mp_obj_t call_round(mp_obj_t x) {
return mp_obj_new_float_from_f(roundf(mp_obj_get_float_to_f(x)));
}
static MP_DEFINE_CONST_FUN_OBJ_1(round_obj, call_round);
// A function that computes the product of floats in an array.
// This function uses the most general C argument interface, which is more difficult
// to use but has access to the globals dict of the module via self->globals.
@@ -74,6 +84,7 @@ mp_obj_t mpy_init(mp_obj_fun_bc_t *self, size_t n_args, size_t n_kw, mp_obj_t *a
#if USE_DOUBLE
mp_store_global(MP_QSTR_add_d, MP_OBJ_FROM_PTR(&add_d_obj));
#endif
mp_store_global(MP_QSTR_round, MP_OBJ_FROM_PTR(&round_obj));
// The productf function uses the most general C argument interface
mp_store_global(MP_QSTR_productf, MP_DYNRUNTIME_MAKE_FUNCTION(productf));


@@ -29,16 +29,17 @@ CFLAGS += -Wall -Werror -DNDEBUG
CFLAGS += -DNO_QSTR
CFLAGS += -DMICROPY_ENABLE_DYNRUNTIME
CFLAGS += -DMP_CONFIGFILE='<$(CONFIG_H)>'
CFLAGS += -fpic -fno-common
CFLAGS += -U _FORTIFY_SOURCE # prevent use of __*_chk libc functions
#CFLAGS += -fdata-sections -ffunction-sections
CFLAGS_ARCH += -fpic -fno-common
CFLAGS_ARCH += -U_FORTIFY_SOURCE # prevent use of __*_chk libc functions
#CFLAGS_ARCH += -fdata-sections -ffunction-sections
MPY_CROSS_FLAGS += -march=$(ARCH)
SRC_O += $(addprefix $(BUILD)/, $(patsubst %.c,%.o,$(filter %.c,$(SRC))) $(patsubst %.S,%.o,$(filter %.S,$(SRC))))
SRC_MPY += $(addprefix $(BUILD)/, $(patsubst %.py,%.mpy,$(filter %.py,$(SRC))))
CLEAN_EXTRA += $(MOD).mpy
CLEAN_EXTRA += $(MOD).mpy .mpy_ld_cache
################################################################################
# Architecture configuration
@@ -47,72 +48,74 @@ ifeq ($(ARCH),x86)
# x86
CROSS =
CFLAGS += -m32 -fno-stack-protector
CFLAGS_ARCH += -m32 -fno-stack-protector
MICROPY_FLOAT_IMPL ?= double
else ifeq ($(ARCH),x64)
# x64
CROSS =
CFLAGS += -fno-stack-protector
CFLAGS_ARCH += -fno-stack-protector
MICROPY_FLOAT_IMPL ?= double
else ifeq ($(ARCH),armv6m)
# thumb
CROSS = arm-none-eabi-
CFLAGS += -mthumb -mcpu=cortex-m0
CFLAGS_ARCH += -mthumb -mcpu=cortex-m0
MICROPY_FLOAT_IMPL ?= none
else ifeq ($(ARCH),armv7m)
# thumb
CROSS = arm-none-eabi-
CFLAGS += -mthumb -mcpu=cortex-m3
CFLAGS_ARCH += -mthumb -mcpu=cortex-m3
MICROPY_FLOAT_IMPL ?= none
else ifeq ($(ARCH),armv7emsp)
# thumb
CROSS = arm-none-eabi-
CFLAGS += -mthumb -mcpu=cortex-m4
CFLAGS += -mfpu=fpv4-sp-d16 -mfloat-abi=hard
CFLAGS_ARCH += -mthumb -mcpu=cortex-m4
CFLAGS_ARCH += -mfpu=fpv4-sp-d16 -mfloat-abi=hard
MICROPY_FLOAT_IMPL ?= float
else ifeq ($(ARCH),armv7emdp)
# thumb
CROSS = arm-none-eabi-
CFLAGS += -mthumb -mcpu=cortex-m7
CFLAGS += -mfpu=fpv5-d16 -mfloat-abi=hard
CFLAGS_ARCH += -mthumb -mcpu=cortex-m7
CFLAGS_ARCH += -mfpu=fpv5-d16 -mfloat-abi=hard
MICROPY_FLOAT_IMPL ?= double
else ifeq ($(ARCH),xtensa)
# xtensa
CROSS = xtensa-lx106-elf-
CFLAGS += -mforce-l32
CFLAGS_ARCH += -mforce-l32
MICROPY_FLOAT_IMPL ?= none
else ifeq ($(ARCH),xtensawin)
# xtensawin
CROSS = xtensa-esp32-elf-
CFLAGS +=
MICROPY_FLOAT_IMPL ?= float
else ifeq ($(ARCH),rv32imc)
# rv32imc
CROSS = riscv64-unknown-elf-
CFLAGS += -march=rv32imac -mabi=ilp32 -mno-relax
CFLAGS_ARCH += -march=rv32imac -mabi=ilp32 -mno-relax
# If Picolibc is available then select it explicitly. Ubuntu 22.04 ships its
# bare metal RISC-V toolchain with Picolibc rather than Newlib, and the default
# is "nosys" so a value must be provided. To avoid having per-distro
# workarounds, always select Picolibc if available.
PICOLIBC_SPECS = $(shell $(CROSS)gcc --print-file-name=picolibc.specs)
PICOLIBC_SPECS := $(shell $(CROSS)gcc --print-file-name=picolibc.specs)
ifneq ($(PICOLIBC_SPECS),picolibc.specs)
CFLAGS += --specs=$(PICOLIBC_SPECS)
CFLAGS_ARCH += -specs=$(PICOLIBC_SPECS)
USE_PICOLIBC := 1
PICOLIBC_ARCH := rv32imac
PICOLIBC_ABI := ilp32
endif
MICROPY_FLOAT_IMPL ?= none
@@ -122,7 +125,47 @@ $(error architecture '$(ARCH)' not supported)
endif
MICROPY_FLOAT_IMPL_UPPER = $(shell echo $(MICROPY_FLOAT_IMPL) | tr '[:lower:]' '[:upper:]')
CFLAGS += -DMICROPY_FLOAT_IMPL=MICROPY_FLOAT_IMPL_$(MICROPY_FLOAT_IMPL_UPPER)
CFLAGS += $(CFLAGS_ARCH) -DMICROPY_FLOAT_IMPL=MICROPY_FLOAT_IMPL_$(MICROPY_FLOAT_IMPL_UPPER)
ifeq ($(LINK_RUNTIME),1)
# All of these picolibc-specific directives are here to work around a
# limitation of Ubuntu 22.04's RISC-V bare metal toolchain. In short, the
# specific version of GCC in use (10.2.0) does not seem to take into account
# extra paths provided by an explicitly passed specs file when performing name
# resolution via `--print-file-name`.
#
# If Picolibc is used and libc.a fails to resolve, then said file's path will
# be computed by searching the Picolibc libraries root for a libc.a file in a
# subdirectory whose path is built using the current `-march` and `-mabi`
# flags that are passed to GCC. The `PICOLIBC_ROOT` environment variable is
# checked to override the starting point for the library file search, and if
# it is not set then the default value is used, assuming that this is running
# on an Ubuntu 22.04 machine.
#
# This should be revised when the CI base image is updated to a newer Ubuntu
# version (that hopefully contains a newer RISC-V compiler) or to another Linux
# distribution.
ifeq ($(USE_PICOLIBC),1)
LIBM_NAME := libc.a
else
LIBM_NAME := libm.a
endif
LIBGCC_PATH := $(realpath $(shell $(CROSS)gcc $(CFLAGS) --print-libgcc-file-name))
LIBM_PATH := $(realpath $(shell $(CROSS)gcc $(CFLAGS) --print-file-name=$(LIBM_NAME)))
ifeq ($(USE_PICOLIBC),1)
ifeq ($(LIBM_PATH),)
# The CROSS toolchain prefix usually ends with a dash, but that may not be
# always the case. If the prefix ends with a dash it has to be taken out as
# Picolibc's architecture directory won't have it in its name. GNU Make does
# not have any facility to perform character-level text manipulation so we
# shell out to sed.
CROSS_PREFIX := $(shell echo $(CROSS) | sed -e 's/-$$//')
PICOLIBC_ROOT ?= /usr/lib/picolibc/$(CROSS_PREFIX)/lib
LIBM_PATH := $(PICOLIBC_ROOT)/$(PICOLIBC_ARCH)/$(PICOLIBC_ABI)/$(LIBM_NAME)
endif
endif
MPY_LD_FLAGS += $(addprefix -l, $(LIBGCC_PATH) $(LIBM_PATH))
endif
CFLAGS += $(CFLAGS_EXTRA)
@@ -165,7 +208,7 @@ $(BUILD)/%.mpy: %.py
# Build native .mpy from object files
$(BUILD)/$(MOD).native.mpy: $(SRC_O)
$(ECHO) "LINK $<"
$(Q)$(MPY_LD) --arch $(ARCH) --qstrs $(CONFIG_H) -o $@ $^
$(Q)$(MPY_LD) --arch $(ARCH) --qstrs $(CONFIG_H) $(MPY_LD_FLAGS) -o $@ $^
# Build final .mpy from all intermediate .mpy files
$(MOD).mpy: $(BUILD)/$(MOD).native.mpy $(SRC_MPY)

tools/ar_util.py (new file)

@@ -0,0 +1,236 @@
#!/usr/bin/env python3
#
# This file is part of the MicroPython project, http://micropython.org/
#
# The MIT License (MIT)
#
# Copyright (c) 2024 Volodymyr Shymanskyy
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
import os
import re
import hashlib
import functools
import pickle
from elftools.elf import elffile
from collections import defaultdict
try:
from ar import Archive
except:
Archive = None
class PickleCache:
def __init__(self, path, prefix=""):
self.path = path
self._get_fn = lambda key: os.path.join(path, prefix + key[:24])
def store(self, key, data):
os.makedirs(self.path, exist_ok=True)
# See also https://bford.info/cachedir/
cachedir_tag_path = os.path.join(self.path, "CACHEDIR.TAG")
if not os.path.exists(cachedir_tag_path):
with open(cachedir_tag_path, "w") as f:
f.write(
"Signature: 8a477f597d28d172789f06886806bc55\n"
"# This file is a cache directory tag created by MicroPython.\n"
"# For information about cache directory tags see https://bford.info/cachedir/\n"
)
with open(self._get_fn(key), "wb") as f:
pickle.dump(data, f)
def load(self, key):
with open(self._get_fn(key), "rb") as f:
return pickle.load(f)
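The store/load pair above boils down to one pickle file per key under a cache directory. A minimal, self-contained sketch of that pattern (using a temporary directory and made-up payload data, not the real archive metadata):

```python
import os
import pickle
import tempfile

cache_dir = tempfile.mkdtemp()

def store(key, data):
    # One file per key, named by a truncated hex digest, as in PickleCache.
    os.makedirs(cache_dir, exist_ok=True)
    with open(os.path.join(cache_dir, key[:24]), "wb") as f:
        pickle.dump(data, f)

def load(key):
    with open(os.path.join(cache_dir, key[:24]), "rb") as f:
        return pickle.load(f)

store("deadbeef" * 8, {"objs": ["a.o"], "symbols": {"foo": "a.o"}})
print(load("deadbeef" * 8)["symbols"]["foo"])  # a.o
```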
def cached(key, cache):
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs):
cache_key = key(*args, **kwargs)
try:
d = cache.load(cache_key)
if d["key"] != cache_key:
raise Exception("Cache key mismatch")
return d["data"]
except Exception:
res = func(*args, **kwargs)
try:
cache.store(
cache_key,
{
"key": cache_key,
"data": res,
},
)
except Exception:
pass
return res
return wrapper
return decorator
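The decorator treats every cache failure (missing file, corrupt pickle, key mismatch) the same way: recompute and try to store. A runnable sketch with an in-memory stand-in for `PickleCache` (the `DictCache` class and `slow_square` function are illustrative, not part of the tool):

```python
import functools

class DictCache:
    # In-memory stand-in with the same store/load interface as PickleCache.
    def __init__(self):
        self._d = {}

    def store(self, key, data):
        self._d[key] = data

    def load(self, key):
        return self._d[key]

def cached(key, cache):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            cache_key = key(*args, **kwargs)
            try:
                d = cache.load(cache_key)
                if d["key"] != cache_key:
                    raise Exception("cache key mismatch")
                return d["data"]
            except Exception:
                # Any cache problem falls back to the real computation.
                res = func(*args, **kwargs)
                cache.store(cache_key, {"key": cache_key, "data": res})
                return res
        return wrapper
    return decorator

calls = []

@cached(key=lambda n: str(n), cache=DictCache())
def slow_square(n):
    calls.append(n)
    return n * n

print(slow_square(4), slow_square(4), len(calls))  # 16 16 1
```

The second call is served entirely from the cache, so the wrapped function runs only once.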
class CachedArFile:
def __init__(self, fn):
if not Archive:
raise RuntimeError("Please run 'pip install ar' to link .a files")
self.fn = fn
self._archive = Archive(open(fn, "rb"))
info = self.load_symbols()
self.objs = info["objs"]
self.symbols = info["symbols"]
def open(self, obj):
return self._archive.open(obj, "rb")
def _cache_key(self):
sha = hashlib.sha256()
with open(self.fn, "rb") as f:
for chunk in iter(lambda: f.read(4096), b""):
sha.update(chunk)
# Change this salt if the cache data format changes
sha.update(bytes.fromhex("00000000000000000000000000000001"))
return sha.hexdigest()
@cached(key=_cache_key, cache=PickleCache(path=".mpy_ld_cache", prefix="ar_"))
def load_symbols(self):
print("Loading", self.fn)
objs = defaultdict(lambda: {"def": set(), "undef": set(), "weak": set()})
symbols = {}
for entry in self._archive:
obj_name = entry.name
elf = elffile.ELFFile(self.open(obj_name))
symtab = elf.get_section_by_name(".symtab")
if not symtab:
continue
obj = objs[obj_name]
for symbol in symtab.iter_symbols():
sym_name = symbol.name
sym_bind = symbol["st_info"]["bind"]
if sym_bind in ("STB_GLOBAL", "STB_WEAK"):
if symbol.entry["st_shndx"] != "SHN_UNDEF":
obj["def"].add(sym_name)
symbols[sym_name] = obj_name
else:
obj["undef"].add(sym_name)
if sym_bind == "STB_WEAK":
obj["weak"].add(sym_name)
return {"objs": dict(objs), "symbols": symbols}
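The cache key hashes the archive contents chunk by chunk and then mixes in a fixed salt, so bumping the salt invalidates every previously cached entry when the data format changes. A small sketch of just that keying scheme (the temporary file stands in for a real `.a` archive):

```python
import hashlib
import os
import tempfile

CACHE_SALT = bytes.fromhex("00000000000000000000000000000001")

def cache_key_for(path, salt=CACHE_SALT):
    # Chunked sha256 of the file contents, mixed with a format salt.
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            sha.update(chunk)
    sha.update(salt)
    return sha.hexdigest()

fd, name = tempfile.mkstemp(suffix=".a")
with os.fdopen(fd, "wb") as f:
    f.write(b"!<arch>\n")

k_old = cache_key_for(name)
k_new = cache_key_for(name, bytes.fromhex("00" * 15 + "02"))
print(k_old != k_new)  # True: a new salt invalidates old cache entries
```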
def resolve(archives, symbols):
resolved_objs = [] # Object files needed to resolve symbols
unresolved_symbols = set()
provided_symbols = {} # Which symbol is provided by which object
symbol_stack = list(symbols)
# A helper function to handle symbol resolution from a particular object
def add_obj(archive, symbol):
obj_name = archive.symbols[symbol]
obj_info = archive.objs[obj_name]
obj_tuple = (archive, obj_name)
if obj_tuple in resolved_objs:
return # Already processed this object
resolved_objs.append(obj_tuple)
# Add the symbols this object defines
for defined_symbol in obj_info["def"]:
if defined_symbol in provided_symbols and not defined_symbol.startswith(
"__x86.get_pc_thunk."
):
if defined_symbol in obj_info["weak"]:
continue
else:
raise RuntimeError(f"Multiple definitions for {defined_symbol}")
provided_symbols[defined_symbol] = obj_name # TODO: mark weak if needed
# Recursively add undefined symbols from this object
for undef_symbol in obj_info["undef"]:
if undef_symbol in obj_info["weak"]:
print(f"Skipping weak dependency: {undef_symbol}")
continue
if undef_symbol not in provided_symbols:
symbol_stack.append(undef_symbol) # Add undefined symbol to resolve
while symbol_stack:
symbol = symbol_stack.pop(0)
if symbol in provided_symbols:
continue # Symbol is already resolved
found = False
for archive in archives:
if symbol in archive.symbols:
add_obj(archive, symbol)
found = True
break
if not found:
unresolved_symbols.add(symbol)
return resolved_objs, list(unresolved_symbols)
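The resolver above is a classic worklist algorithm: pop an unresolved symbol, find the first archive member that defines it, pull in that member, and queue its own undefined symbols. A condensed sketch on mock archive data (the `MockAr` shape mirrors `CachedArFile.symbols`/`CachedArFile.objs`; the object and symbol names are illustrative):

```python
from collections import namedtuple

# Mock archive exposing the same .symbols / .objs shape as CachedArFile.
MockAr = namedtuple("MockAr", ["fn", "symbols", "objs"])

libm = MockAr(
    fn="libm.a",
    symbols={"roundf": "s_round.o"},
    objs={"s_round.o": {"def": {"roundf"}, "undef": {"__truncdfsf2"}, "weak": set()}},
)
libgcc = MockAr(
    fn="libgcc.a",
    symbols={"__truncdfsf2": "trunc.o"},
    objs={"trunc.o": {"def": {"__truncdfsf2"}, "undef": set(), "weak": set()}},
)

def resolve(archives, symbols):
    resolved, provided, unresolved = [], {}, set()
    stack = list(symbols)
    while stack:
        sym = stack.pop(0)
        if sym in provided:
            continue
        for ar in archives:
            if sym in ar.symbols:
                obj = ar.symbols[sym]
                if (ar, obj) not in resolved:
                    resolved.append((ar, obj))
                    info = ar.objs[obj]
                    for d in info["def"]:
                        provided[d] = obj
                    # Queue this member's own non-weak dependencies.
                    stack.extend(info["undef"] - info["weak"])
                break
        else:
            unresolved.add(sym)
    return resolved, unresolved

objs, missing = resolve([libm, libgcc], {"roundf"})
print([o for _, o in objs])  # pulls in s_round.o, then trunc.o transitively
```

Note how asking for `roundf` alone transitively drags in the libgcc member that satisfies the float-truncation helper it depends on.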
def expand_ld_script(fn):
# This function parses a subset of ld scripts
# Typically these are just groups of static lib references
group_pattern = re.compile(r"GROUP\s*\(\s*([^\)]+)\s*\)", re.MULTILINE)
output_format_pattern = re.compile(r"OUTPUT_FORMAT\s*\(\s*([^\)]+)\s*\)", re.MULTILINE)
comment_pattern = re.compile(r"/\*.*?\*/", re.MULTILINE | re.DOTALL)
with open(fn, "r") as f:
content = f.read()
content = comment_pattern.sub("", content).strip()
# Ensure no unrecognized instructions
leftovers = content
for pattern in (group_pattern, output_format_pattern):
leftovers = pattern.sub("", leftovers)
if leftovers.strip():
raise ValueError("Invalid instruction found in the ld script: " + leftovers)
# Extract files from GROUP instructions
files = []
for match in group_pattern.findall(content):
files.extend([file.strip() for file in re.split(r"[,\s]+", match) if file.strip()])
return files
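Toolchains often ship `libm.a` or `libc.a` as a tiny linker script rather than a real archive, typically just a comment plus a `GROUP(...)` of the actual libraries. A self-contained sketch of the parsing done above, on an inline example script (the library names are illustrative):

```python
import re

# A typical thin ld script standing in for a static library.
script = "/* GNU ld script */\nGROUP ( libc_nano.a libm_nano.a )\n"

group_pattern = re.compile(r"GROUP\s*\(\s*([^\)]+)\s*\)", re.MULTILINE)
comment_pattern = re.compile(r"/\*.*?\*/", re.MULTILINE | re.DOTALL)

content = comment_pattern.sub("", script).strip()
files = []
for match in group_pattern.findall(content):
    # Members may be separated by commas and/or whitespace.
    files.extend(f.strip() for f in re.split(r"[,\s]+", match) if f.strip())
print(files)  # ['libc_nano.a', 'libm_nano.a']
```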
def load_archive(fn):
ar_header = b"!<arch>\012"
with open(fn, "rb") as f:
is_ar_file = f.read(len(ar_header)) == ar_header
if is_ar_file:
return [CachedArFile(fn)]
else:
return [CachedArFile(item) for item in expand_ld_script(fn)]
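`load_archive` distinguishes a real archive from an ld script purely by the eight-byte `!<arch>\n` magic that starts every System V `ar` file. A minimal sketch of that check, using two throwaway files:

```python
import tempfile

AR_HEADER = b"!<arch>\012"  # magic at the start of every System V archive

def is_ar_file(path):
    with open(path, "rb") as f:
        return f.read(len(AR_HEADER)) == AR_HEADER

with tempfile.NamedTemporaryFile(delete=False, suffix=".a") as f:
    f.write(AR_HEADER + b"...")
    real_ar = f.name
with tempfile.NamedTemporaryFile(delete=False, suffix=".a") as f:
    f.write(b"GROUP ( libc.a )\n")  # an ld script masquerading as a .a
    script = f.name

print(is_ar_file(real_ar), is_ar_file(script))  # True False
```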


@@ -155,12 +155,15 @@ PYTHON_VER=$(python --version | cut -d' ' -f2)
export IDF_CCACHE_ENABLE=1
function ci_esp32_idf_setup {
pip3 install pyelftools
git clone --depth 1 --branch $IDF_VER https://github.com/espressif/esp-idf.git
# doing a treeless clone isn't quite as good as --shallow-submodules, but it
# is smaller than full clones and works when the submodule commit isn't a head.
git -C esp-idf submodule update --init --recursive --filter=tree:0
./esp-idf/install.sh
# Install additional packages for mpy_ld into the IDF env
source esp-idf/export.sh
pip3 install pyelftools
pip3 install ar
}
function ci_esp32_build_common {
@@ -287,6 +290,7 @@ function ci_qemu_setup_arm {
sudo apt-get update
sudo apt-get install qemu-system
sudo pip3 install pyelftools
sudo pip3 install ar
qemu-system-arm --version
}
@@ -295,6 +299,7 @@ function ci_qemu_setup_rv32 {
sudo apt-get update
sudo apt-get install qemu-system
sudo pip3 install pyelftools
sudo pip3 install ar
qemu-system-riscv32 --version
}
@@ -385,6 +390,7 @@ function ci_samd_build {
function ci_stm32_setup {
ci_gcc_arm_setup
pip3 install pyelftools
pip3 install ar
pip3 install pyhy
}
@@ -503,18 +509,40 @@ function ci_native_mpy_modules_build {
else
arch=$1
fi
for natmod in features1 features3 features4 deflate framebuf heapq random re
for natmod in features1 features3 features4 heapq re
do
make -C examples/natmod/$natmod clean
make -C examples/natmod/$natmod ARCH=$arch
done
# btree requires thread local storage support on rv32imc.
if [ $arch != rv32imc ]; then
make -C examples/natmod/btree ARCH=$arch
# deflate, framebuf, and random currently cannot build on xtensa due to
# some symbols that have been removed from the compiler's runtime, in
# favour of being provided from ROM.
if [ $arch != "xtensa" ]; then
for natmod in deflate framebuf random
do
make -C examples/natmod/$natmod clean
make -C examples/natmod/$natmod ARCH=$arch
done
fi
# features2 requires soft-float on armv7m and rv32imc.
if [ $arch != rv32imc ] && [ $arch != armv7m ]; then
# features2 requires soft-float on armv7m, rv32imc, and xtensa. On armv6m
# the compiler generates absolute relocations in the object file
# referencing soft-float functions, which is not supported at the moment.
make -C examples/natmod/features2 clean
if [ $arch = "rv32imc" ] || [ $arch = "armv7m" ] || [ $arch = "xtensa" ]; then
make -C examples/natmod/features2 ARCH=$arch MICROPY_FLOAT_IMPL=float
elif [ $arch != "armv6m" ]; then
make -C examples/natmod/features2 ARCH=$arch
fi
# btree requires thread local storage support on rv32imc, whilst on xtensa
# it relies on symbols that are provided from ROM but not exposed to
# natmods at the moment.
if [ $arch != "rv32imc" ] && [ $arch != "xtensa" ]; then
make -C examples/natmod/btree clean
make -C examples/natmod/btree ARCH=$arch
fi
}
function ci_native_mpy_modules_32bit_build {
@@ -550,6 +578,7 @@ function ci_unix_standard_v2_run_tests {
function ci_unix_coverage_setup {
sudo pip3 install setuptools
sudo pip3 install pyelftools
sudo pip3 install ar
gcc --version
python3 --version
}
@@ -598,6 +627,7 @@ function ci_unix_32bit_setup {
sudo apt-get install gcc-multilib g++-multilib libffi-dev:i386 python2.7
sudo pip3 install setuptools
sudo pip3 install pyelftools
sudo pip3 install ar
gcc --version
python2.7 --version
python3 --version


@@ -30,6 +30,7 @@ Link .o files to .mpy
import sys, os, struct, re
from elftools.elf import elffile
import ar_util
sys.path.append(os.path.dirname(__file__) + "/../py")
import makeqstrdata as qstrutil
@@ -664,7 +665,7 @@ def do_relocation_text(env, text_addr, r):
R_XTENSA_PDIFF32,
R_XTENSA_ASM_EXPAND,
):
if s.section.name.startswith(".text"):
if not hasattr(s, "section") or s.section.name.startswith(".text"):
# it looks like R_XTENSA_[P]DIFF32 into .text is already correctly relocated,
# and expand relaxations cannot occur in non-executable sections.
return
@@ -1075,59 +1076,59 @@ def process_riscv32_relocation(env, text_addr, r):
return addr, value
def load_object_file(env, felf):
with open(felf, "rb") as f:
elf = elffile.ELFFile(f)
env.check_arch(elf["e_machine"])
def load_object_file(env, f, felf):
elf = elffile.ELFFile(f)
env.check_arch(elf["e_machine"])
# Get symbol table
symtab = list(elf.get_section_by_name(".symtab").iter_symbols())
# Get symbol table
symtab = list(elf.get_section_by_name(".symtab").iter_symbols())
# Load needed sections from ELF file
sections_shndx = {} # maps elf shndx to Section object
for idx, s in enumerate(elf.iter_sections()):
if s.header.sh_type in ("SHT_PROGBITS", "SHT_NOBITS"):
if s.data_size == 0:
# Ignore empty sections
pass
elif s.name.startswith((".literal", ".text", ".rodata", ".data.rel.ro", ".bss")):
sec = Section.from_elfsec(s, felf)
sections_shndx[idx] = sec
if s.name.startswith(".literal"):
env.literal_sections.append(sec)
else:
env.sections.append(sec)
elif s.name.startswith(".data"):
raise LinkError("{}: {} non-empty".format(felf, s.name))
# Load needed sections from ELF file
sections_shndx = {} # maps elf shndx to Section object
for idx, s in enumerate(elf.iter_sections()):
if s.header.sh_type in ("SHT_PROGBITS", "SHT_NOBITS"):
if s.data_size == 0:
# Ignore empty sections
pass
elif s.name.startswith((".literal", ".text", ".rodata", ".data.rel.ro", ".bss")):
sec = Section.from_elfsec(s, felf)
sections_shndx[idx] = sec
if s.name.startswith(".literal"):
env.literal_sections.append(sec)
else:
# Ignore section
pass
elif s.header.sh_type in ("SHT_REL", "SHT_RELA"):
shndx = s.header.sh_info
if shndx in sections_shndx:
sec = sections_shndx[shndx]
sec.reloc_name = s.name
sec.reloc = list(s.iter_relocations())
for r in sec.reloc:
r.sym = symtab[r["r_info_sym"]]
# Link symbols to their sections, and update known and unresolved symbols
for sym in symtab:
sym.filename = felf
shndx = sym.entry["st_shndx"]
env.sections.append(sec)
elif s.name.startswith(".data"):
raise LinkError("{}: {} non-empty".format(felf, s.name))
else:
# Ignore section
pass
elif s.header.sh_type in ("SHT_REL", "SHT_RELA"):
shndx = s.header.sh_info
if shndx in sections_shndx:
# Symbol with associated section
sym.section = sections_shndx[shndx]
if sym["st_info"]["bind"] in ("STB_GLOBAL", "STB_WEAK"):
# Defined global symbol
if sym.name in env.known_syms and not sym.name.startswith(
"__x86.get_pc_thunk."
):
raise LinkError("duplicate symbol: {}".format(sym.name))
env.known_syms[sym.name] = sym
elif sym.entry["st_shndx"] == "SHN_UNDEF" and sym["st_info"]["bind"] == "STB_GLOBAL":
# Undefined global symbol, needs resolving
env.unresolved_syms.append(sym)
sec = sections_shndx[shndx]
sec.reloc_name = s.name
sec.reloc = list(s.iter_relocations())
for r in sec.reloc:
r.sym = symtab[r["r_info_sym"]]
# Link symbols to their sections, and update known and unresolved symbols
dup_errors = []
for sym in symtab:
sym.filename = felf
shndx = sym.entry["st_shndx"]
if shndx in sections_shndx:
# Symbol with associated section
sym.section = sections_shndx[shndx]
if sym["st_info"]["bind"] in ("STB_GLOBAL", "STB_WEAK"):
# Defined global symbol
if sym.name in env.known_syms and not sym.name.startswith("__x86.get_pc_thunk."):
dup_errors.append("duplicate symbol: {}".format(sym.name))
env.known_syms[sym.name] = sym
elif sym.entry["st_shndx"] == "SHN_UNDEF" and sym["st_info"]["bind"] == "STB_GLOBAL":
# Undefined global symbol, needs resolving
env.unresolved_syms.append(sym)
if len(dup_errors):
raise LinkError("\n".join(dup_errors))
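The `dup_errors` change switches from failing on the first duplicate to collecting every conflict and raising once, so a single link run reports all problems. A standalone sketch of that collect-then-raise pattern (`register_globals` and its inputs are hypothetical, not part of `mpy_ld.py`):

```python
class LinkError(Exception):
    pass

def register_globals(object_symbols):
    # Collect every duplicate before raising, so one run reports all
    # conflicts instead of stopping at the first.
    known, errors = {}, []
    for filename, syms in object_symbols:
        for name in syms:
            if name in known:
                errors.append("duplicate symbol: {}".format(name))
            else:
                known[name] = filename
    if errors:
        raise LinkError("\n".join(errors))
    return known

try:
    register_globals([("a.o", ["foo", "bar"]), ("b.o", ["foo", "bar"])])
except LinkError as e:
    msg = str(e)
print(msg.count("duplicate symbol"))  # 2: both conflicts in one error
```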
def link_objects(env, native_qstr_vals_len):
@@ -1188,6 +1189,8 @@ def link_objects(env, native_qstr_vals_len):
]
)
}
undef_errors = []
for sym in env.unresolved_syms:
assert sym["st_value"] == 0
if sym.name == "_GLOBAL_OFFSET_TABLE_":
@@ -1205,7 +1208,10 @@ def link_objects(env, native_qstr_vals_len):
sym.section = mp_fun_table_sec
sym.mp_fun_table_offset = fun_table[sym.name]
else:
raise LinkError("{}: undefined symbol: {}".format(sym.filename, sym.name))
undef_errors.append("{}: undefined symbol: {}".format(sym.filename, sym.name))
if len(undef_errors):
raise LinkError("\n".join(undef_errors))
# Align sections, assign their addresses, and create full_text
env.full_text = bytearray(env.arch.asm_jump(8)) # dummy, to be filled in later
@@ -1446,8 +1452,27 @@ def do_link(args):
log(LOG_LEVEL_2, "qstr vals: " + ", ".join(native_qstr_vals))
env = LinkEnv(args.arch)
try:
for file in args.files:
load_object_file(env, file)
# Load object files
for fn in args.files:
with open(fn, "rb") as f:
load_object_file(env, f, fn)
if args.libs:
# Load archive info
archives = []
for item in args.libs:
archives.extend(ar_util.load_archive(item))
# List symbols to look for
syms = set(sym.name for sym in env.unresolved_syms)
# Resolve symbols from libs
lib_objs, _ = ar_util.resolve(archives, syms)
# Load extra object files from libs
for ar, obj in lib_objs:
obj_name = ar.fn + ":" + obj
log(LOG_LEVEL_2, "using " + obj_name)
with ar.open(obj) as f:
load_object_file(env, f, obj_name)
link_objects(env, len(native_qstr_vals))
build_mpy(env, env.find_addr("mpy_init"), args.output, native_qstr_vals)
except LinkError as er:
@@ -1458,13 +1483,16 @@ def do_link(args):
def main():
import argparse
cmd_parser = argparse.ArgumentParser(description="Run scripts on the pyboard.")
cmd_parser = argparse.ArgumentParser(description="Link native object files into a MPY bundle.")
cmd_parser.add_argument(
"--verbose", "-v", action="count", default=1, help="increase verbosity"
)
cmd_parser.add_argument("--arch", default="x64", help="architecture")
cmd_parser.add_argument("--preprocess", action="store_true", help="preprocess source files")
cmd_parser.add_argument("--qstrs", default=None, help="file defining additional qstrs")
cmd_parser.add_argument(
"--libs", "-l", dest="libs", action="append", help="static .a libraries to link"
)
cmd_parser.add_argument(
"--output", "-o", default=None, help="output .mpy file (default to input with .o->.mpy)"
)