This commit performs some minor cleanup of the code involved in Viper
load/store operations when those operations use an integer index.
Most platform-specific code blocks were able to generate correct opcodes
even when the index is 0, but they would still fall back to the general
case. The general case would still emit a shortened opcode sequence, so
this commit does not alter the overall behaviour; it just makes it easier
to extend the platform-specific code later to handle the full index range
rather than the subset of indices handled now.
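Illustratively (a sketch with hypothetical names, not the exact source), a
platform-specific block changes from:
    // Before: index 0 needlessly fell through to the general case:
    //     if (index != 0 && index < PLATFORM_MAX_IMM) { /* emit platform opcode */ }
    // After: the full immediate range, including 0, stays on the fast path:
    if (index < PLATFORM_MAX_IMM) {
        // emit the platform-specific opcode
    }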
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit cleans up the Viper code generation blocks for
register-indexed load and store operations.
The common code generator block is simplified by moving
architecture-specific code to the appropriate native generation backends
whenever possible. This should make that part of the Viper generator
clearer and easier to maintain in the long term.
To achieve this, six generic assembler meta-opcodes have been
introduced, named `ASM_{LOAD,STORE}{8,16,32}_REG_REG_REG`. A
platform-independent implementation for those operations is provided, so
backends that cannot emit a shorter sequence for the requested operation,
or that are fine with the platform-independent implementation, can simply
omit those meta-opcodes.
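A minimal sketch of how this plays out in the common emitter, assuming an
illustrative generic fallback (the macro body shape is hypothetical):
    // Backends providing an optimised sequence define the macro themselves;
    // all other backends get the platform-independent implementation:
    #ifndef ASM_LOAD16_REG_REG_REG
    #define ASM_LOAD16_REG_REG_REG(as, rd, rbase, ridx) \
        do { /* rd = *(uint16_t *)(rbase + (ridx << 1)), via generic opcodes */ } while (0)
    #endif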
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit lets the Viper code generator use optimised code sequences
for register-indexed load and store operations when generating Thumb
code.
Register-indexed load and store operations for Thumb can now take at
most two machine opcodes for halfword and word values, and just a single
machine opcode for byte values. The original implementation could
generate up to four opcodes in the worst case (dealing with word
values).
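For example, a halfword load with a register index can be emitted as
(hypothetical `asm_thumb_*` helper names, for illustration only):
    // Halfword: scale the index by 2, then load -- two opcodes:
    asm_thumb_lsls(as, REG_TEMP, reg_index, 1);               // tmp = idx << 1
    asm_thumb_ldrh_reg_reg(as, reg_dest, reg_base, REG_TEMP); // dst = *(u16 *)(base + tmp)
    // Byte: a single opcode suffices:
    asm_thumb_ldrb_reg_reg(as, reg_dest, reg_base, reg_index);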
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit lets the Viper code generator use optimised code sequences
for register-indexed load and store operations when generating Arm code.
The existing code defaulted to generic multi-operation code sequences
for Arm code in most cases. Now optimised implementations are provided
for register-indexed loads and stores of all data sizes, taking at most
two machine opcodes for each operation.
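For instance, a word load can use Arm's scaled register-offset addressing
in a single opcode (the helper name is hypothetical; the addressing mode is
real):
    // ldr r_dst, [r_base, r_idx, lsl #2] -- no separate shift needed:
    asm_arm_ldr_reg_reg_reg_shift(as, reg_dest, reg_base, reg_index, 2);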
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit fixes two Xtensa sequences in order to terminate early when
loading and storing word values via an immediate index.
This was meant to be part of 55ca3fd675
but whilst it was part of the code being tested, it didn't end up in the
commit.
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit marks the condition code tables as const; these tables are
used to determine which opcode sequence to emit for the requested
comparison type.
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit introduces the ability to emit optimised code paths on
Xtensa for load/store operations indexed via an immediate offset.
If the immediate offset of a load/store operation is within the range
that allows it to be embedded into an available opcode, then that opcode
is emitted instead of the generic code sequence.
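For example, Xtensa's L32I opcode embeds an 8-bit offset scaled by 4 (so 0
to 1020); a sketch of the selection logic:
    // Emit a single L32I when the offset fits its encoding, else fall back:
    if (offset >= 0 && offset <= 1020 && (offset % 4) == 0) {
        // offset fits: emit L32I a_dst, a_base, offset
    } else {
        // generic sequence: materialise the offset, add, then load
    }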
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit improves the RV32 code sequence that is emitted if a
function needs to set up an exception handler in its prologue.
The old code would clear a temporary register and then copy that value
to places that needed to be initialised with zero values. On RV32
there is a dedicated register (`zero`, aka `x0`) that is hardwired to
zero, which allows us to skip the extra register clear and use the zero register
to initialise values.
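As an illustration (hypothetical emitter helper names, not the actual
`py/asmrv32.c` API):
    // Before: materialise zero in a temporary, then store the temporary:
    //     asm_rv32_emit_load_immediate(as, REG_TEMP, 0);
    //     asm_rv32_emit_store_word(as, REG_TEMP, REG_BASE, offset);
    // After: store the hardwired zero register (x0) directly:
    asm_rv32_emit_store_word(as, REG_ZERO, REG_BASE, offset);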
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit improves the emitted code sequences for address generation
in the Viper subsystem when loading/storing 16-bit and 32-bit values via a
register offset.
The Xtensa opcodes ADDX2 and ADDX4 are used to avoid performing the
extra shifts to align the final operation offset. Those opcodes are
available on both xtensa and xtensawin MicroPython architectures.
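Expressed as C for clarity, the two opcodes compute, in a single
instruction each:
    #include <stdint.h>
    // ADDX2/ADDX4 fold the index scaling into the addition:
    static inline uint32_t addx2(uint32_t index, uint32_t base) { return (index << 1) + base; }
    static inline uint32_t addx4(uint32_t index, uint32_t base) { return (index << 2) + base; }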
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
The `emit_load_reg_with_object()` helper function will clobber `REG_TEMP0`.
This is currently OK on architectures where `REG_RET` and `REG_TEMP0` are
the same (all architectures except RV32), because all callers of
`emit_load_reg_with_object()` use either `REG_RET` or `REG_TEMP0` as the
destination register. But on RV32 these registers are different and so
when `REG_RET` is the destination, `REG_TEMP0` is clobbered, leading to
incorrectly generated machine code.
This commit fixes the issue simply by using `REG_TEMP0` as the destination
register for all uses of `emit_load_reg_with_object()`, and adds a comment
to make sure the caller of this function is careful.
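A sketch of the hazard that this fixes (the call shape is hypothetical):
    // On RV32, REG_RET and REG_TEMP0 are different registers, and the helper
    // clobbers REG_TEMP0 internally:
    emit_load_reg_with_object(emit, REG_RET, obj);   // silently trashes REG_TEMP0
    // The fix: always load into REG_TEMP0, moving the value afterwards if needed:
    emit_load_reg_with_object(emit, REG_TEMP0, obj);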
Signed-off-by: Damien George <damien@micropython.org>
The code generating the entry to the finally handler of an async-with
statement was simply wrong for the case of the native emitter. Among other
things the layout of the stack was incorrect.
This is fixed by this commit. The setup of the async-with finally handler
is now put in a dedicated emit function, for both the bytecode and native
emitters to implement in their own way (the bytecode emitter is unchanged,
just factored to a function).
With this fix all of the async-with tests now work when using the native
emitter.
Signed-off-by: Damien George <damien@micropython.org>
A value thrown/injected into a native generator needs to be stored in a
dedicated variable outside `nlr_buf_t`, following the `inject_exc` variable
in `py/vm.c`.
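A sketch of the idea, using a hypothetical layout for the native generator
state:
    typedef struct _native_gen_state_t {
        nlr_buf_t nlr;        // NLR buffer used while the generator is running
        mp_obj_t inject_exc;  // pending thrown/injected exception, or MP_OBJ_NULL
    } native_gen_state_t;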
Signed-off-by: Damien George <damien@micropython.org>
This emitter prints out pseudo-machine instructions, instead of the usual
output of the native emitter. It can be enabled on any port via
`MICROPY_EMIT_NATIVE_DEBUG` (make sure other native emitters are disabled)
but the easiest way to use it is with mpy-cross:
    $ mpy-cross -march=debug file.py
Signed-off-by: Damien George <damien@micropython.org>
Selected load/store code sequences have been optimised for RV32IMC where
fewer and smaller opcodes could be used.
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This adds a native code generation backend for RISC-V RV32I CPUs, currently
limited to the I, M, and C instruction sets.
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
The STATIC macro was introduced a very long time ago in commit
d5df6cd44a. The original reason for this was
to have the option to define it to nothing so that all static functions
become global functions and therefore visible to certain debug tools, so
one could do function size comparison and other things.
This STATIC feature is rarely (if ever) used. And with the use of LTO and
heavy inline optimisation, analysing the size of individual functions when
they are not static is not a good representation of the size of code when
fully optimised.
So the macro does not have much use and it's simpler to just remove it.
Then you know exactly what it's doing. For example, newcomers don't have
to learn what the STATIC macro is and why it exists. Reading the code is
also less "loud" with a lowercase static.
One other minor point in favour of removing it is that it stops bugs with
`STATIC inline`, which should always be `static inline`.
Methodology for this commit was:
1) git ls-files | egrep '\.[ch]$' | \
   xargs sed -Ei "s/(^| )STATIC($| )/\1static\2/"
2) Do some manual cleanup in the diff by searching for the word STATIC in
comments and changing those back.
3) "git-grep STATIC docs/", manually fixed those cases.
4) "rg -t python STATIC", manually fixed codegen lines that used STATIC.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
Now native functions and native generators have similar behaviour: the
first machine-word of their code is an index to get to the prelude. This
simplifies the handling of these types of functions, and also reduces the
size of the emitted native machine code by no longer requiring special code
at the start of the function to load a pointer to the prelude.
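A sketch of how the prelude can now be located, treating the first machine
word as a byte offset into the code (the exact encoding is abstracted
here):
    uintptr_t prelude_offset = ((uintptr_t *)fun_data)[0];
    const uint8_t *prelude = (const uint8_t *)fun_data + prelude_offset;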
Signed-off-by: Damien George <damien@micropython.org>
Allows bytecode itself to be used instead of an mp_raw_code_t in the simple
and common cases of a bytecode function without any children.
This can be used to further reduce frozen code size, and has the potential
to optimise other areas like importing.
Signed-off-by: Damien George <damien@micropython.org>
Without this it's possible to get a compiler error about the comparison
always being true, because MP_BINARY_OP_LESS is 0. And it seems that gcc
optimises these 6 equality comparisons into the same size machine code as
before.
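Illustratively (a sketch, not the exact source), the change replaces a
range test with equality tests:
    // Before: warns, because MP_BINARY_OP_LESS == 0 makes the lower bound
    // always true for an unsigned op:
    //     if (op >= MP_BINARY_OP_LESS && op <= MP_BINARY_OP_NOT_EQUAL) { ... }
    // After: six equality comparisons, folded by gcc to the same size code:
    if (op == MP_BINARY_OP_LESS || op == MP_BINARY_OP_MORE
        || op == MP_BINARY_OP_EQUAL || op == MP_BINARY_OP_NOT_EQUAL
        || op == MP_BINARY_OP_LESS_EQUAL || op == MP_BINARY_OP_MORE_EQUAL) {
        // handle the comparison
    }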
In @micropython.native code the types of variables and expressions are
always Python objects, so they can be initialised as such. This prevents
problems with compiling optimised code like while-loops where a local may
be referenced before it is assigned to.
Signed-off-by: Damien George <damien@micropython.org>
This new logic tracks when an unconditional jump/raise occurs in the
emitted code stream (bytecode or native machine code) and suppresses all
subsequent code, until a label is assigned. This eliminates a lot of
cases of dead code, with relatively simple logic.
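A minimal sketch of the tracking logic (field and hook names hypothetical):
    // After emitting an unconditional jump or raise:
    emit->suppress = true;
    // At the start of emitting any subsequent instruction:
    if (emit->suppress) {
        return; // dead code: drop it
    }
    // When a label is assigned, code becomes reachable again:
    emit->suppress = false;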
This commit combined with the previous one (that removed the existing
dead-code finding logic) has the following code size change:
bare-arm: -16 -0.028%
minimal x86: -60 -0.036%
unix x64: -368 -0.070%
unix nanbox: -80 -0.017%
stm32: -204 -0.052% PYBV10
cc3200: +0 +0.000%
esp8266: -232 -0.033% GENERIC
esp32: -224 -0.015% GENERIC[incl -40(data)]
mimxrt: -192 -0.054% TEENSY40
renesas-ra: -200 -0.032% RA6M2_EK
nrf: +28 +0.015% pca10040
rp2: -256 -0.050% PICO
samd: -12 -0.009% ADAFRUIT_ITSYBITSY_M4_EXPRESS
Signed-off-by: Damien George <damien@micropython.org>
This commit adjusts the asm_thumb_xxx functions so they can be dynamically
configured to use ARMv7-M instructions or not. This is available when
MICROPY_DYNAMIC_COMPILER is enabled, and then controlled by the value of
mp_dynamic_compiler.native_arch.
If MICROPY_DYNAMIC_COMPILER is disabled the previous behaviour is retained:
the functions emit ARMv7-M instructions only if MICROPY_EMIT_THUMB_ARMV7M
is enabled.
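A sketch of the resulting selection logic (the helper name is
hypothetical):
    static inline bool thumb_allow_armv7m(void) {
    #if MICROPY_DYNAMIC_COMPILER
        return mp_dynamic_compiler.native_arch >= MP_NATIVE_ARCH_ARMV7M;
    #else
        return MICROPY_EMIT_THUMB_ARMV7M;
    #endif
    }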
Signed-off-by: Damien George <damien@micropython.org>
This eliminates the need to save and restore the exception unwind handler
pointer when calling nlr_push.
Signed-off-by: Damien George <damien@micropython.org>
The recent rework of bytecode made all constants global with respect to the
module (previously, each function had its own constant table). That means
the constant table for a module is shared among all functions/methods/etc
within the module.
This commit adds support to the compiler to de-duplicate constants in this
module constant table. So if a constant is used more than once -- eg 1.0
or (None, None) -- then the same object is reused for all instances.
For example, if there is code like `print(1.0, 1.0)` then the parser will
create two independent constants 1.0 and 1.0. The compiler will then (with
this commit) notice they are the same and only put one of them in the
constant table. The bytecode will then reuse that constant twice in the
print expression. That allows the second 1.0 to be reclaimed by the GC,
and it also means the constant table has one fewer entry, which saves a word.
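A sketch of the de-duplication step (table handling hypothetical; equality
via `mp_obj_equal`):
    // Reuse an existing equal constant if there is one, else append:
    for (size_t i = 0; i < table_len; ++i) {
        if (mp_obj_equal(table[i], obj)) {
            return i; // found: reuse it, obj can be reclaimed by the GC
        }
    }
    table[table_len] = obj;
    return table_len++;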
Signed-off-by: Damien George <damien@micropython.org>
Some architectures (like esp32 xtensa) cannot read byte-wise from
executable memory. This means the prelude for native functions -- which is
usually located after the machine code for the native function -- must be
placed in separate memory that can be read byte-wise. Prior to this commit
this was achieved by enabling N_PRELUDE_AS_BYTES_OBJ for the emitter and
MICROPY_EMIT_NATIVE_PRELUDE_AS_BYTES_OBJ for the runtime. The prelude was
then placed in a bytes object, pointed to by the module's constant table.
This behaviour is changed by this commit so that a pointer to the prelude
is stored either in mp_obj_fun_bc_t.child_table, or in
mp_obj_fun_bc_t.child_table[num_children] if num_children > 0. The reasons
for doing this are:
1. It decouples the native emitter from runtime requirements, the emitted
code no longer needs to know if the system it runs on can/can't read
byte-wise from executable memory.
2. It makes all ports have the same emitter behaviour, there is no longer
the N_PRELUDE_AS_BYTES_OBJ option.
3. The module's constant table is now used only for actual constants in the
Python code. This allows further optimisations to be done with the
constants (eg constant deduplication).
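Following the storage scheme above, a sketch of locating the prelude:
    const uint8_t *prelude;
    if (num_children == 0) {
        prelude = (const uint8_t *)fun->child_table;
    } else {
        prelude = (const uint8_t *)fun->child_table[num_children];
    }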
Code size change for those ports that enable the native emitter:
unix x64: +80 +0.015%
stm32: +24 +0.004% PYBV10
esp8266: +88 +0.013% GENERIC
esp32: -20 -0.002% GENERIC[incl -112(data)]
rp2: +32 +0.005% PICO
Signed-off-by: Damien George <damien@micropython.org>
mpy-cross will now generate native code based on the size of
mp_code_state_native_t, and the runtime will use this struct to calculate
the offset of the .state field. This makes native code generation and
execution (which rely on this struct) independent of the settings
MICROPY_STACKLESS and MICROPY_PY_SYS_SETTRACE, both of which change the
size of the mp_code_state_t struct.
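For example, the runtime can now compute the offset of the VM state like
this (a sketch):
    #include <stddef.h>
    // Independent of MICROPY_STACKLESS and MICROPY_PY_SYS_SETTRACE:
    size_t state_offset = offsetof(mp_code_state_native_t, state);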
Fixes issue #5059.
Signed-off-by: Damien George <damien@micropython.org>
This reverts commit 7e8222ae06.
The prelude data must exist somewhere in the native code so load_raw_code
and mpy-tool.py can access and parse it.
Signed-off-by: Damien George <damien@micropython.org>
This is a partial implementation of PEP 448 to allow multiple ** unpackings
when calling a function or method.
The compiler is modified to encode the argument as a None: obj key-value
pair (similar to how regular keyword arguments are encoded as str: obj
pairs). The extra object that was pushed on the stack to hold a single **
unpacking object is no longer used and is removed.
The runtime is modified to decode this new format.
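A sketch of the runtime decode (the loop shape is hypothetical):
    // A None key in the keyword-argument area marks a ** unpacking:
    if (key == mp_const_none) {
        // value is a mapping: unpack all of its items into the call's kwargs
    } else {
        // regular keyword argument: key is a str object
    }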
Signed-off-by: David Lechner <david@pybricks.com>
This commit introduces the following changes:
- All jump opcodes are changed to have variable length arguments, of either
1 or 2 bytes (previously they were fixed at 2 bytes). In most cases only
1 byte is needed to encode the short jump offset, saving bytecode size.
- The bytecode emitter now selects 1 byte jump arguments when the jump
offset is guaranteed to fit in 1 byte. This is achieved by checking if
the code size changed during the last pass and, if it did (if it shrank),
then requesting that the compiler make another pass to get the correct
offsets of the now-smaller code. This can continue multiple times until
the code stabilises. The code can only ever shrink so this iteration is
guaranteed to complete. In most cases no extra passes are needed, the
original 4 passes are enough to get it right by the 4th pass (because the
2nd pass computes roughly the correct labels and the 3rd pass computes
the correct size for the jump argument).
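A sketch of the shrink-until-stable rule described above (names
hypothetical):
    if (emit->code_size < emit->prev_code_size) {
        // The code shrank, so label offsets moved: request another pass to
        // re-encode jumps against the final positions. Sizes can only
        // shrink, so this terminates.
        emit->request_extra_pass = true;
    }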
This change to the jump opcode encoding reduces .mpy files and RAM usage
(when bytecode is in RAM) by about 2% on average.
The performance of the VM is not impacted, at least within measurement of
the performance benchmark suite.
Code size is reduced for builds that include a decent amount of frozen
bytecode. ARM Cortex-M builds without any frozen code increase by about
350 bytes.
Signed-off-by: Damien George <damien@micropython.org>
Background: .mpy files are precompiled .py files, built using mpy-cross,
that contain compiled bytecode functions (and can also contain machine
code). The benefit of using an .mpy file over a .py file is that they are
faster to import and take less memory when importing. They are also
smaller on disk.
But the real benefit of .mpy files comes when they are frozen into the
firmware. This is done by loading the .mpy file during compilation of the
firmware and turning it into a set of big C data structures (the job of
mpy-tool.py), which are then compiled and downloaded into the ROM of a
device. These C data structures can be executed in-place, ie directly from
ROM. This makes importing even faster because there is very little to do,
and also means such frozen modules take up much less RAM (because their
bytecode stays in ROM).
The downside of frozen code is that it requires recompiling and reflashing
the entire firmware. This can be a big barrier to entry, slows down
development time, and makes it harder to do OTA updates of frozen code
(because the whole firmware must be updated).
This commit attempts to solve this problem by providing a solution that
sits between loading .mpy files into RAM and freezing them into the
firmware. The .mpy file format has been reworked so that it consists of
data and bytecode which is mostly static and ready to run in-place. If
these new .mpy files are located in flash/ROM which is memory addressable,
the .mpy file can be executed (mostly) in-place.
With this approach there is still a small amount of unpacking and linking
of the .mpy file that needs to be done when it's imported, but it's still
much better than loading an .mpy from disk into RAM (although not as good
as freezing .mpy files into the firmware).
The main trick to make static .mpy files is to adjust the bytecode so any
qstrs that it references now go through a lookup table to convert from
local qstr number in the module to global qstr number in the firmware.
That means the bytecode does not need linking/rewriting of qstrs when it's
loaded. Instead only a small qstr table needs to be built (and put in RAM)
at import time. This means the bytecode itself is static/constant and can
be used directly if it's in addressable memory. Also the qstr string data
in the .mpy file, and some constant object data, can be used directly.
Note that the qstr table is global to the module (ie not per function).
In more detail, in the VM what used to be (schematically):
    qst = DECODE_QSTR_VALUE;
is now (schematically):
    idx = DECODE_QSTR_INDEX;
    qst = qstr_table[idx];
That allows the bytecode to be fixed at compile time and not need
relinking/rewriting of the qstr values. Only qstr_table needs to be linked
when the .mpy is loaded.
Incidentally, this helps to reduce the size of bytecode because what used
to be 2-byte qstr values in the bytecode are now (mostly) 1-byte indices.
If the module uses the same qstr more than two times then the bytecode is
smaller than before.
The following changes are measured for this commit compared to the
previous (the baseline):
- average 7%-9% reduction in size of .mpy files
- frozen code size is reduced by about 5%-7%
- importing .py files uses about 5% less RAM in total
- importing .mpy files uses about 4% less RAM in total
- importing .py and .mpy files takes about the same time as before
The qstr indirection in the bytecode has only a small impact on VM
performance. For stm32 on PYBv1.0 the performance change of this commit
is:
diff of scores (higher is better)
N=100 M=100 baseline -> this-commit diff diff% (error%)
bm_chaos.py 371.07 -> 357.39 : -13.68 = -3.687% (+/-0.02%)
bm_fannkuch.py 78.72 -> 77.49 : -1.23 = -1.563% (+/-0.01%)
bm_fft.py 2591.73 -> 2539.28 : -52.45 = -2.024% (+/-0.00%)
bm_float.py 6034.93 -> 5908.30 : -126.63 = -2.098% (+/-0.01%)
bm_hexiom.py 48.96 -> 47.93 : -1.03 = -2.104% (+/-0.00%)
bm_nqueens.py 4510.63 -> 4459.94 : -50.69 = -1.124% (+/-0.00%)
bm_pidigits.py 650.28 -> 644.96 : -5.32 = -0.818% (+/-0.23%)
core_import_mpy_multi.py 564.77 -> 581.49 : +16.72 = +2.960% (+/-0.01%)
core_import_mpy_single.py 68.67 -> 67.16 : -1.51 = -2.199% (+/-0.01%)
core_qstr.py 64.16 -> 64.12 : -0.04 = -0.062% (+/-0.00%)
core_yield_from.py 362.58 -> 354.50 : -8.08 = -2.228% (+/-0.00%)
misc_aes.py 429.69 -> 405.59 : -24.10 = -5.609% (+/-0.01%)
misc_mandel.py 3485.13 -> 3416.51 : -68.62 = -1.969% (+/-0.00%)
misc_pystone.py 2496.53 -> 2405.56 : -90.97 = -3.644% (+/-0.01%)
misc_raytrace.py 381.47 -> 374.01 : -7.46 = -1.956% (+/-0.01%)
viper_call0.py 576.73 -> 572.49 : -4.24 = -0.735% (+/-0.04%)
viper_call1a.py 550.37 -> 546.21 : -4.16 = -0.756% (+/-0.09%)
viper_call1b.py 438.23 -> 435.68 : -2.55 = -0.582% (+/-0.06%)
viper_call1c.py 442.84 -> 440.04 : -2.80 = -0.632% (+/-0.08%)
viper_call2a.py 536.31 -> 532.35 : -3.96 = -0.738% (+/-0.06%)
viper_call2b.py 382.34 -> 377.07 : -5.27 = -1.378% (+/-0.03%)
And for unix on x64:
diff of scores (higher is better)
N=2000 M=2000 baseline -> this-commit diff diff% (error%)
bm_chaos.py 13594.20 -> 13073.84 : -520.36 = -3.828% (+/-5.44%)
bm_fannkuch.py 60.63 -> 59.58 : -1.05 = -1.732% (+/-3.01%)
bm_fft.py 112009.15 -> 111603.32 : -405.83 = -0.362% (+/-4.03%)
bm_float.py 246202.55 -> 247923.81 : +1721.26 = +0.699% (+/-2.79%)
bm_hexiom.py 615.65 -> 617.21 : +1.56 = +0.253% (+/-1.64%)
bm_nqueens.py 215807.95 -> 215600.96 : -206.99 = -0.096% (+/-3.52%)
bm_pidigits.py 8246.74 -> 8422.82 : +176.08 = +2.135% (+/-3.64%)
misc_aes.py 16133.00 -> 16452.74 : +319.74 = +1.982% (+/-1.50%)
misc_mandel.py 128146.69 -> 130796.43 : +2649.74 = +2.068% (+/-3.18%)
misc_pystone.py 83811.49 -> 83124.85 : -686.64 = -0.819% (+/-1.03%)
misc_raytrace.py 21688.02 -> 21385.10 : -302.92 = -1.397% (+/-3.20%)
The code size change is (firmware with a lot of frozen code benefits the
most):
bare-arm: +396 +0.697%
minimal x86: +1595 +0.979% [incl +32(data)]
unix x64: +2408 +0.470% [incl +800(data)]
unix nanbox: +1396 +0.309% [incl -96(data)]
stm32: -1256 -0.318% PYBV10
cc3200: +288 +0.157%
esp8266: -260 -0.037% GENERIC
esp32: -216 -0.014% GENERIC[incl -1072(data)]
nrf: +116 +0.067% pca10040
rp2: -664 -0.135% PICO
samd: +844 +0.607% ADAFRUIT_ITSYBITSY_M4_EXPRESS
As part of this change the .mpy file format version is bumped to version 6.
And mpy-tool.py has been improved to provide a good visualisation of the
contents of .mpy files.
In summary: this commit changes the bytecode to use qstr indirection, and
reworks the .mpy file format to be simpler and allow .mpy files to be
executed in-place. Performance is not impacted too much. Eventually it
will be possible to store such .mpy files in a linear, read-only, memory-
mappable filesystem so they can be executed from flash/ROM. This will
essentially be able to replace frozen code for most applications.
Signed-off-by: Damien George <damien@micropython.org>
uint types in viper mode can now be used for all binary operators except
floor-divide and modulo.
Fixes issue #1847 and issue #6177.
Signed-off-by: Damien George <damien@micropython.org>