Problem description
How to perform regression tests in embedded systems
What good practices and strategies are there for running regression tests in embedded environments, or in other situations where the ability to automate tests is very limited?
In my experience, a lot of the testing has to be performed manually, i.e. a tester needs to push a sequence of buttons and verify that the machine behaves correctly. As a developer, it is really hard to assure yourself that your changes don't break something else.
Without proper regression tests the situation gets even worse during big refactorings and such.
Does anyone recognize the problem? Did you find a good solution or process to deal with this kind of problem?
‑‑‑‑‑
Reference solutions
Approach 1:
Personally, I'm a big fan of having my embedded code compile on both the target hardware and my own computer. For example, when targeting an 8086, I included both an entry point that maps to reset on the 8086 hardware and a DOS entry point. The hardware was designed so all IO was memory mapped. I then conditionally compiled in a hardware simulator and conditionally changed the hardware memory locations to simulated hardware memory.
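The dual-target idea above can be sketched in C. This is a minimal illustration, not the answerer's actual code: the register name, address, and `TARGET_HW` macro are assumptions. The point is that application code reads the "register" identically in both builds, while the host build exposes plain memory a simulator or test can poke.

```c
#include <stdint.h>

/* Hypothetical status register: on target hardware it lives at a fixed
 * memory-mapped address; on the host build it is an ordinary variable
 * that the simulated-hardware layer (or a unit test) can set directly. */
#ifdef TARGET_HW
#define IO_STATUS (*(volatile uint8_t *)0x4000u)
#else
uint8_t sim_io_status;              /* backing store for the simulator */
#define IO_STATUS sim_io_status
#endif

/* Application code is identical in both builds. */
int device_ready(void)
{
    return (IO_STATUS & 0x01u) != 0;
}
```

On the host build, a regression test sets `sim_io_status` and checks the application logic without any hardware attached.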
If I were to work on a non‑x86 platform, I'd probably write an emulator instead.
Another approach is to create a test rig where all the inputs and outputs for the hardware are controlled through software. We use this a lot in factory testing.
One time we built a simulator into the IO hardware. That way the rest of the system could be tested by sending a few commands over CAN to put the hardware into simulated mode. Similarly, well‑factored software could have a "simulated mode" where the IO is simulated in response to software commands.
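A "simulated mode" toggled by a software command might look like the following sketch. The function names and the ADC example are illustrative assumptions; in the setup described above, the mode switch would arrive as a CAN command rather than a direct function call.

```c
#include <stdbool.h>
#include <stdint.h>

/* Runtime-switchable IO layer: a test command flips the flag, and
 * subsequent reads return a scripted value instead of touching pins. */
static bool     sim_mode      = false;
static uint16_t sim_adc_value = 0;

void io_enter_sim_mode(uint16_t scripted_value)
{
    sim_mode      = true;
    sim_adc_value = scripted_value;
}

void io_exit_sim_mode(void)
{
    sim_mode = false;
}

uint16_t io_read_adc(void)
{
    if (sim_mode)
        return sim_adc_value;   /* scripted response for tests */
    return 0;                   /* real build would sample the ADC here */
}
```

The rest of the system calls `io_read_adc()` as usual and cannot tell whether it is talking to real or simulated hardware.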
Approach 2:
For embedded testing, I would suggest that you design your way out of this very early in the development process. Sandboxing your embedded code to run on a PC platform helps a lot, and then do mocking afterwards :)
This will ensure integrity for most of it, but you would still need to do system and acceptance testing manually later on.
Approach 3:
Does anyone recognize the problem?
Most definitely.
Did you find a good solution or process to deal with this kind of problem?
A combination of techniques:
- Automated tests;
- Brute‑force tests, i.e. ones which aren't as intelligent as automated tests, but which repeatedly test a feature over a long period (hours or days), and can be left to run without human intervention;
- Manual tests (often hard to avoid);
- Testing on a software emulator on a PC (or as a last resort, a hardware emulator).
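A brute-force test of the kind listed above is essentially a loop that hammers one feature and checks an invariant on every iteration. Here is a toy sketch with an assumed feature (an event counter that must never lose or duplicate events); on a real rig the loop bound would be hours of wall-clock time rather than an iteration count.

```c
#include <assert.h>
#include <stdint.h>

static uint32_t event_count = 0;

void     record_event(void) { event_count++; }
uint32_t events_seen(void)  { return event_count; }

/* Repeatedly exercise the feature and verify the invariant each step;
 * runs unattended, so it can be left going overnight. */
uint32_t soak_test(uint32_t iterations)
{
    for (uint32_t i = 0; i < iterations; i++) {
        uint32_t before = events_seen();
        record_event();
        assert(events_seen() == before + 1);   /* no lost/double events */
    }
    return events_seen();
}
```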
With regard to compiling with a PC compiler: that would certainly make sense for high-level modules, and for low-level modules with a suitable test harness.
When it comes to, for example, parts of the code which have to deal with real‑time signals from multiple sources, emulation is a good place to start, but I don't think it is sufficient. There is often no substitute for testing the code on the actual hardware, in as realistic an environment as possible.
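The kind of high-level module that ports cleanly to a PC compiler is pure logic with no hardware dependencies. As a hedged illustration (the thermostat and its thresholds are invented for this example), such code can be regression-tested on the desktop after every change, reserving the hardware rig for the real-time, signal-handling parts discussed above.

```c
#include <stdbool.h>

/* Pure-logic module: thermostat with hysteresis. No registers, no
 * interrupts, no timing - so it compiles unchanged on a desktop
 * compiler and can run in an automated regression suite there.
 * Thresholds (18/22 degrees C) are arbitrary example values. */
bool heater_should_run(int temp_c, bool currently_on)
{
    if (currently_on)
        return temp_c < 22;   /* keep heating until upper threshold */
    return temp_c < 18;       /* only start below lower threshold */
}
```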
Approach 4:
Unlike most responders so far, I work with embedded environments that do not resemble desktop systems at all, and therefore cannot emulate the embedded system on the desktop.
In order to write good testing systems, you need your test system to have feed-forward and feedback. JTAG is the most common feed-forward way to control the device. You can set the complete state of the device (perhaps even the entire board, if you're lucky) and then let the test code run, at which point you collect your feedback. JTAG can also serve as a feedback device; however, a logic analyzer with a software API is the best option in this situation. You can look for certain levels on pins, count pulses, and even parse data streams from streaming peripherals.
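The pulse-counting part of that feedback loop reduces to edge detection over a capture buffer. This sketch assumes a made-up capture format (one byte per sample, nonzero = pin high); real analyzer APIs differ, so treat the function as illustrative rather than any particular vendor's interface.

```c
#include <stddef.h>
#include <stdint.h>

/* Given a buffer of pin samples captured by a logic analyzer,
 * count rising edges (low-to-high transitions). */
size_t count_rising_edges(const uint8_t *samples, size_t n)
{
    size_t edges = 0;
    for (size_t i = 1; i < n; i++) {
        if (!samples[i - 1] && samples[i])
            edges++;
    }
    return edges;
}
```

A test asserts the expected edge count for a known stimulus, closing the feed-forward/feedback loop in software.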
Approach 5:
Provide test harnesses / sandboxes / mockups for individual subsystems, and for the entire project, that emulate the target environment.
This does not remove the need for tests in the real environment, but it greatly reduces their number, as the simulation will catch most problems; by the time those all pass and you perform the expensive human-driven test, you can be reasonably confident it will pass the first time.
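One common way to build such a per-subsystem harness in C is to route hardware access through a function pointer, so the harness substitutes a stub without modifying the code under test. All names here are illustrative assumptions (a hypothetical 10-bit ADC with a 3300 mV reference), not the answerer's design.

```c
#include <stdint.h>

typedef uint16_t (*adc_read_fn)(void);

/* Harness stub: stands in for the real ADC driver during tests. */
static uint16_t fake_adc(void) { return 300; }

/* Code under test: convert a raw reading to millivolts, assuming a
 * 10-bit ADC (0..1023) against a 3300 mV reference. The IO dependency
 * is injected, so the same code runs against real or fake hardware. */
uint32_t read_millivolts(adc_read_fn read)
{
    return ((uint32_t)read() * 3300u) / 1023u;
}
```

In production, the real driver function is passed in; in the harness, `fake_adc` (or a scripted sequence of values) is passed instead.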
(by sris, Paul, cwap, Steve Melnikoff, dwhall, moonshadow)