This commit factors existing code in `run-tests.py` into a new helper
function `create_test_report()`. That function prints out a summary of the
test run (eg number of tests passed, number failed, number skipped) and
creates the corresponding `_results.json` file.
This is done so `create_test_report()` can be reused by the other test
runners.
The `test_count` counter is now gone, and instead the number of passed plus
number of failed tests is used as an equivalent count.
For consistency this commit makes a minor change to the printed output of
`run-tests.py`: instead of printing a shorthand name for tests that failed
or skipped, it now prints the full name. Eg what was previously printed as
`attrtuple2` is now printed as `basics/attrtuple2.py`. This makes the
output a little longer (when there are failed/skipped tests) but helps to
disambiguate the test name, eg which directory it's in.
Signed-off-by: Damien George <damien@micropython.org>
This commit lets the test runner enumerate and run native tests if the
feature check fails but native tests were explicitly requested from the
command line.
The old behaviour would disable native tests anyway if the feature check
failed. However, this hid a bug in the x86 native emitter that was
triggered even during the feature check. That meant the test suite would
pass on x86 even with a broken emitter, as those tests would have been
skipped anyway.
Now, if the user asks for native code they will get native code out of the
runner no matter what.
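A hedged sketch of the new decision (the helper and option names here are
assumptions, not the actual code):

    # Only auto-skip native tests when the user didn't explicitly request
    # native code; an explicit request (eg `--emit native`) now always
    # runs them, so a broken emitter shows up as failures instead of skips.
    native_supported = run_feature_check("native_check.py")  # hypothetical helper
    if not native_supported and args.emit != "native":
        skip_native = True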
Co-authored-by: Damien George <damien@micropython.org>
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
Some tests are just too big for targets that don't have much heap memory,
eg `tests/extmod/vfs_rom.py`. Other tests are too large because the target
doesn't have enough IRAM for native code, eg esp8266 running
`tests/micropython/viper_args.py`.
Previously, such tests were explicitly skipped on targets known to have
little memory, eg esp8266. But this doesn't scale to multiple targets, nor
to more and more tests which are too large.
This commit addresses that by adding logic to the test runner so it can
automatically skip tests when they don't fit in the target's memory. It
does this by prepending a `print('START TEST')` to every test, and if a
`MemoryError` occurs before that line is printed then the test was too big.
This works for standard tests, tests that go via .mpy files, and tests that
run in native emitter mode via .mpy files.
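In outline, the detection works something like this (a simplified sketch,
not the exact code):

    # Prepend a marker print to the test source before sending it to the
    # target.  If a MemoryError occurs before the marker is printed, the
    # test didn't even load, so it must be too large for the target.
    test_source = b"print('START TEST')\n" + test_source
    output = run_on_target(test_source)  # hypothetical helper
    too_large = b"START TEST" not in output and b"MemoryError" in output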
For tests that are too big, it prints `lrge <test name>` on the output,
and at the end prints them on their own line, separate from the other
skipped tests, so they can be distinguished. They are also distinguished
in the `_results.json` file as skipped tests with the reason "too large".
Signed-off-by: Damien George <damien@micropython.org>
The `_results.json` output of `run-tests.py` was recently changed in
7a55cb6b36 to add a list of passed and
skipped tests.
The way this was done turned out not to be general enough, because we want
to add another type of result, namely tests that are skipped because they
are too large.
Instead of having separate lists in `_results.json` for each kind of result
(pass, fail, skip, skip too large, etc), this commit changes the output
form of `_results.json` so that it stores a single list of 3-tuples of all
tests that were run:
[(test_name, result, reason), ...]
That's more general and allows adding a reason for skipped and failed
tests. At the moment this reason is just an empty string, but can be
improved in the future.
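For example, the relevant part of `_results.json` might look like this
(an illustrative excerpt; surrounding keys are omitted and names may
differ):

    "results": [
        ["basics/attrtuple2.py", "pass", ""],
        ["extmod/vfs_rom.py", "skip", ""],
        ["basics/some_failing_test.py", "fail", ""]
    ]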
Signed-off-by: Damien George <damien@micropython.org>
The output `_results.json` file generated by `run-tests.py` currently
contains a list of failed tests. This commit adds to the output a list of
passed and skipped tests, and so now provides full information about which
tests were run and what their results were.
Signed-off-by: Damien George <damien@micropython.org>
This commit fixes three open issues related to the asyncio scheduler
exiting prematurely when the main task queue is empty, in cases where
CPython would not exit (for example, because the main task is not done
and is merely waiting on a different queue).
In the first case, the scheduler exits because running a task via
`run_until_complete` did not schedule any dependent tasks.
In the other two cases, the scheduler exits because the tasks are queued in
an event queue.
Tests have been added which reproduce the original issues. These test
cases document the unsupported use of `Event.set()` from a soft IRQ, and
are skipped in environments that cannot run them (webassembly and the
native emitter).
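As a rough illustration of the pattern involved (a hypothetical sketch
using a port-specific timer; the actual tests differ):

    import asyncio
    from machine import Timer  # port-specific, shown for illustration

    evt = asyncio.Event()

    def on_timer(t):
        evt.set()  # runs in a soft IRQ (the unsupported pattern being tested)

    async def main():
        Timer(0).init(mode=Timer.ONE_SHOT, period=10, callback=on_timer)
        # While waiting here the run queue is empty; the scheduler used to
        # exit prematurely instead of waiting for the event.
        await evt.wait()

    asyncio.run(main())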
Fixes issues #16759, #16569 and #16318.
Signed-off-by: Yoctopuce dev <dev@yoctopuce.com>
Allocation of a large compression window may fail, and in that case keep
the `DeflateIO` state consistent so its other methods (such as `close()`)
still work. Consistency is kept by only updating the `self->write` member
if the window allocation succeeds.
Thanks to @jimmo for finding the bug.
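At the Python level the behaviour is as follows (a sketch assuming a
target with limited heap):

    import io
    import deflate

    d = deflate.DeflateIO(io.BytesIO(), deflate.RAW, 15)  # wbits=15: 32 KiB window
    try:
        d.write(b"data")  # first write allocates the window; may raise MemoryError
    except MemoryError:
        pass
    d.close()  # must still work, because the state was kept consistent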
Signed-off-by: Damien George <damien@micropython.org>
This won't be generated normally, but a failed run (for example, a
unittest-based test that raises an error or doesn't call unittest.main())
will generate one.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
When using unittest (for example) with injected mpy files, not only does
the name of the main test module need to be `__main__`, but also the
`__main__` module should correspond to this injected module. Otherwise the
unittest test won't be detected.
Signed-off-by: Damien George <damien@micropython.org>
This commit implements a method to detect at runtime if inline assembler
support is enabled, and if so which platform it targets.
This allows clean test runs on modified versions of ARM-based ports where
inline assembler support is disabled, allows running the inline assembler
tests on ports where the feature is not enabled by default but has been
manually enabled, and ensures that the correct inlineasm tests are always
run for ports that support more than one architecture (esp32, qemu, rp2).
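One possible runtime probe (a hypothetical sketch; the feature check used
by the test suite may differ):

    import micropython

    try:
        @micropython.asm_thumb
        def _probe():
            mov(r0, 1)
        has_asm_thumb = _probe() == 1
    except AttributeError:
        # micropython.asm_thumb doesn't exist when the inline assembler
        # is disabled or targets another architecture.
        has_asm_thumb = False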
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit adds support for writing inline assembler functions when
targeting a RV32IMC processor.
Given that this takes up a bit of rodata space due to its large
instruction decoding table and its extensive error messages, it is
enabled by default only on offline targets such as mpy-cross and the
qemu port.
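For illustration, a trivial RV32 inline assembler function looks like this
(runnable only on builds with the RV32 inline assembler enabled, such as
the qemu port):

    import micropython

    @micropython.asm_rv32
    def add_one(a0):
        addi(a0, a0, 1)  # the return value is taken from a0

    print(add_one(41))  # prints 42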
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
Thumb/Thumb2 tests have been moved into their own subdirectory, as
RV32IMC-specific tests will be added as part of the RV32 inline assembler
support.
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
A return value of 0 from Python-level `ioctl()` means success, but if
that's returned unconditionally it means that the method supports all
ioctl calls, which is not true. Returning 0 without doing anything can
potentially lead to a crash, eg for MP_STREAM_SEEK which requires returning
a value in the passed-in struct pointer.
This commit makes it so that all `ioctl()` methods respond only to
MP_STREAM_CLOSE, ie they return -1 (indicating error) for all other ioctl
calls.
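A sketch of the corrected pattern for a Python-level stream class (the
numeric value 4 corresponds to MP_STREAM_CLOSE in py/stream.h):

    import io

    class MyStream(io.IOBase):
        def ioctl(self, req, arg):
            if req == 4:  # MP_STREAM_CLOSE
                return 0  # success
            return -1  # any other ioctl is unsupported: signal an error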
Signed-off-by: Damien George <damien@micropython.org>
Running unittest-based tests with --via-mpy is currently broken, because
the unittest test needs the module to be named `__main__`, whereas it's
actually called `__injected_test`.
Fix this by changing the name when the file is opened.
Signed-off-by: Damien George <damien@micropython.org>
So that a failing unittest-based test has its entire log printed when using
`run-tests.py --print-failures`.
Signed-off-by: Damien George <damien@micropython.org>
All the existing tests require a .exp file (either manually specified or
generated by first running the test under CPython) that is used to check
the output of running the test under MicroPython. The test passes if the
output matches the expected output exactly.
This has worked very well for a long time now. But some of the newer
hardware tests (eg UART, SPI, PWM) don't really fit this model, for the
following main reasons:
- Some but not all parts of the test should be skipped on certain hardware
targets. With the expected-output approach, skipping tests is either all
or nothing.
- It's often useful to output diagnostics as part of the test, which should
not affect the result of the test (eg the diagnostics change from run to
run, like timing values, or from target to target).
- Sometimes a test will do a complex check and then print True or False
according to whether it passed, which obscures the actual test result.
To improve upon this, this commit adds support to `run-tests.py` for a test
to use `unittest`. It detects this by examining the end of the output
after running the test, looking for the test summary printed by `unittest`
(or an error message saying `unittest` was not found). If the test uses
`unittest` then it should not have a .exp file, and it's not run under
CPython. A `unittest` based test passes or fails based on the summary
printed by `unittest`.
Note that (as long as `unittest` is installed on the target) the tests are
still fully independent and you can still run them without `run-tests.py`:
you just run it as usual, eg `mpremote run <test.py>`. This is very useful
when creating and debugging tests.
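A minimal example of such a test (illustrative):

    import unittest

    class Test(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(1 + 1, 2)

        def test_optional(self):
            self.skipTest("feature not available on this target")

    if __name__ == "__main__":
        unittest.main()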
Note also that the standard test suite testing Python semantics (eg
everything in `tests/basics/`) will probably never use unittest. Only more
advanced tests will, and ones that are not runnable under CPython.
Signed-off-by: Damien George <damien@micropython.org>
Previously to this commit, running the test suite on a bare-metal board
required specifying the target (really platform) and device, eg:
$ ./run-tests.py --target pyboard --device /dev/ttyACM1
That's quite a lot to type, and you also need to know what the target
platform is, when a lot of the time you either don't care or it doesn't
matter.
This commit makes it easier to run the tests by replacing both of these
options with a single `--test-instance` (`-t` for short) option. That
option specifies the executable/port/device to test. Then the target
platform is automatically detected.
The `--test-instance` can be passed:
- "unix" (the default) to use the unix version of MicroPython
- "webassembly" to test the webassembly port
- anything else is considered a port/device to pass to Pyboard
There are also some shortcuts to specify a port/device, following
`mpremote`:
- a<n> is short for /dev/ttyACM<n>
- u<n> is short for /dev/ttyUSB<n>
- c<n> is short for COM<n>
For example:
$ ./run-tests.py -t a1
Note that the default test instance is "unix" and so this commit does not
change the standard way to run tests on the unix port, by just doing
`./run-tests.py`.
As part of this change, the platform (and its native architecture, if it
supports importing native .mpy files) is shown at the start of the test
run.
Signed-off-by: Damien George <damien@micropython.org>
Commit 69c25ea865 made raising `SystemExit`
do a soft reset (on bare-metal targets). This means that any test which is
skipped by a target (by raising `SystemExit`) will trigger a soft reset on
that target, and then it must execute its startup code, such as `boot.py`.
If the timing is right, this startup code can be unintentionally
interrupted by the test runner when preparing the next test, because the
test runner enters the raw REPL again via a Ctrl-C Ctrl-A Ctrl-D sequence
(in `Pyboard.enter_raw_repl()`).
When this happens (`boot.py` is interrupted) the target may not be set up
correctly, and it may (in the case of stm32 boards) flash LEDs and take
extra time, slowing down the test run.
Fix this by explicitly waiting for the target to finish its soft reset when
it skips a test.
Signed-off-by: Damien George <damien@micropython.org>
Removing the now-unused (see previous commit for details) `--write-exp` and
`--list-tests` options helps to simplify the rather complex logic in
`run-tests.py`.
Signed-off-by: Damien George <damien@micropython.org>
So that certain tests can be skipped when running on this target. These
thread tests do not pass because the zephyr port cannot create more than 4
threads at once.
Signed-off-by: Damien George <damien@micropython.org>
Now that some ports support multiple architectures (eg esp32 has both
Xtensa and RISC-V CPUs) it's no longer possible to set mpy-cross flags
based on the target, eg `./run-tests.py --target esp32`. Instead this
commit makes it so the `-march=xxx` argument to mpy-cross is detected
automatically via evaluation of `sys.implementation._mpy`.
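The decoding follows the documented layout of `sys.implementation._mpy`,
along these lines (the architecture table follows the mpy file-format
documentation):

    import sys

    sys_mpy = sys.implementation._mpy
    arch = [None, "x86", "x64", "armv6", "armv6m", "armv7m", "armv7em",
            "armv7emsp", "armv7emdp", "xtensa", "xtensawin",
            "rv32imc"][sys_mpy >> 10]
    print("mpy version:", sys_mpy & 0xFF)
    print("native arch:", arch)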
Signed-off-by: Damien George <damien@micropython.org>
Currently, the qemu-arm (and qemu-riscv) port has two build modes:
- a simple test that executes a Python string; and
- a full test that uses tinytest to embed all tests within the firmware,
then executes that and captures the output.
This is very different to all the other ports. A difficulty with using
tinytest is that with the large number of tests the firmware overflows its
virtual flash size. It's also hard to run tests via .mpy files and with
the native emitter. Being different to the other ports also means an extra
burden on maintenance.
This commit reworks the qemu-arm port so that it has a single build target
that creates a standard firmware which has a REPL. When run under
qemu-system-arm, the REPL acts like any other bare-metal port, complete
with soft reset (use machine.reset() to turn it off and exit
qemu-system-arm).
This approach gives many benefits:
- allows playing with a REPL without hardware;
- allows running the test suite as it would run on a bare-metal board, by
making qemu-system-arm redirect the UART serial of the virtual device to
a /dev/pts/xx file, and then running run-tests.py against that serial
device (see the example after this list);
- skipping tests is now done via the logic in `run-tests.py` and no longer
needs multiple places to define which tests to skip
(`tools/tinytest-codegen.py`, `ports/qemu-arm/tests_profile.txt` and also
`tests/run-tests.py`);
- allows testing/using mpremote with the qemu-arm port.
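For example (an illustrative invocation; the exact QEMU flags depend on
the board chosen in the port's Makefile):

    $ qemu-system-arm -machine mps2-an385 -display none -serial pty -kernel build/firmware.elf
    char device redirected to /dev/pts/3 (label serial0)

run-tests.py can then be pointed at /dev/pts/3 as the serial device.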
Eventually the qemu-riscv port will get a similar change.
Prior to this commit the test results were:
743 tests ok. (121 skipped)
With this commit the test results are:
753 tests performed (22673 individual testcases)
753 tests passed
138 tests skipped
More tests are skipped because more are included in the run. But overall
more tests pass.
Signed-off-by: Damien George <damien@micropython.org>
Install the mingw variant of Python since it behaves more like a 'real'
Windows CPython than the msys2 variant: os.name == 'nt', not 'posix'. Note
that os.sep is still '/' though so we don't actually need to skip the
import_file test. This way one single Python version can be used both for
running run-tests.py and getting the expected test output.
Signed-off-by: stijn <stijn@ignitron.net>
Before the fix in the parent commit, some of these tests hung
indefinitely. After it, they seem to pass consistently.
This work was funded through GitHub Sponsors.
Signed-off-by: Angus Gratton <angus@redyak.com.au>
The code generating the entry to the finally handler of an async-with
statement was simply wrong for the case of the native emitter. Among other
things the layout of the stack was incorrect.
This is fixed by this commit. The setup of the async-with finally handler
is now put in a dedicated emit function, for both the bytecode and native
emitters to implement in their own way (the bytecode emitter is unchanged,
just factored to a function).
With this fix all of the async-with tests now work when using the native
emitter.
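The kind of code exercised is shown below (illustrative; such a test would
be run with the whole program compiled natively, eg via
`./run-tests.py --emit native`):

    import asyncio

    class CM:
        async def __aenter__(self):
            return self
        async def __aexit__(self, exc_type, exc, tb):
            return False  # don't suppress exceptions

    async def main():
        async with CM():
            print("inside")

    asyncio.run(main())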
Signed-off-by: Damien George <damien@micropython.org>
A value thrown/injected into a native generator needs to be stored in a
dedicated variable outside `nlr_buf_t`, following the `inject_exc` variable
in `py/vm.c`.
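For example, the following must work under the native emitter (the thrown
exception is delivered into the paused generator):

    import micropython

    @micropython.native
    def gen():
        try:
            yield 1
        except ValueError:
            yield 2

    g = gen()
    print(next(g))                # 1
    print(g.throw(ValueError())) # 2: the injected value must survive the resume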
Signed-off-by: Damien George <damien@micropython.org>
This adds a QEMU-based bare-metal 32-bit RISC-V port. For the time being
only QEMU's 32-bit "virt" board is supported, using the ilp32 ABI and the
RV32IMC architecture.
The top-level README and the run-tests.py files are updated for this new
port.
Signed-off-by: Alessandro Gatti <a.gatti@frob.it>
This commit adds a significant portion of the existing MicroPython asyncio
module to the webassembly port, using parts of the existing asyncio code
and some custom JavaScript parts.
The key difference to the standard asyncio is that this version uses the
JavaScript runtime to do the actual scheduling and waiting on events, eg
Promise fulfillment, timeouts, fetching URLs.
This implementation does not include asyncio.run(). Instead one just uses
asyncio.create_task(..) to start tasks and then returns to JavaScript,
which will then run the tasks.
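For example:

    import asyncio

    async def main():
        await asyncio.sleep(1)
        print("done")

    asyncio.create_task(main())
    # Control now returns to JavaScript; its event loop resumes the task
    # when the sleep timer fires.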
The implementation here tries to reuse as much existing asyncio code as
possible, and gets all the semantics correct for things like cancellation
and asyncio.wait_for. An alternative approach would reimplement Task,
Event, etc using JavaScript Promise's. That approach is very difficult to
get right when trying to implement cancellation (because it's not possible
to cancel a JavaScript Promise).
Signed-off-by: Damien George <damien@micropython.org>
This allows running tests with a .js/.mjs suffix, and also .py tests using
node and the webassembly port.
Signed-off-by: Damien George <damien@micropython.org>
This is now easy to support, since the first machine-word of a native
function tells how to find the prelude, from which the function name can be
extracted in the same way as for bytecode.
Signed-off-by: Damien George <damien@micropython.org>
With the move to GitHub Actions, all MicroPython CI builds now run on
GitHub Actions. This allows faster parallel builds and saves time by not
building when no relevant files have changed.
This reveals a few failing tests, so those are temporarily disabled until
they can be fixed.
Signed-off-by: David Lechner <david@pybricks.com>
Signed-off-by: Damien George <damien@micropython.org>