Automated Logic Validation
Imagine a way to run automated, unattended logic validation tests on a control system, and to get validation documentation back when the tests are completed. This is our automated logic validation service: a repeatable, consistent, and fully automated method to conduct a thorough validation of control logic – and generate the associated documentation – on a periodic basis.
Test Compiler
The Test Compiler is a utility developed by Cape Software that generates test scenarios from data entered in an Excel file. These scenarios are then read and executed in VP Link to perform automated, unattended control system logic validation. The Test Compiler also generates a detailed, HTML-based test plan (a step-by-step procedure describing how the test is conducted). Then, when the test scenarios are run in VP Link, the test results are written to a log file that reports what was tested and how it performed.
With these test scenarios, validation can be done as if someone were sitting at the simulation running tests:
- A specified group of inputs is forced in the controller
- Corresponding outputs are read back from the controller
- Output values from the controller are compared to expected values, and the comparison results are written to a log file
But no one need be sitting at the simulation. This is all accomplished without human intervention.
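The force/read/compare loop above can be sketched in a few lines of Python. This is a minimal illustration only: the `FakeController` class, its `force`/`read` methods, and the log-line format are invented stand-ins, not VP Link's actual API.

```python
def verify_outputs(controller, expected, log):
    """Compare controller outputs against expected values, logging each result.

    `controller` is any object with a read(tag) method; the tag names and
    log format below mimic the excerpt later in this document but are
    illustrative only.
    """
    all_passed = True
    for tag, want in expected.items():
        got = controller.read(tag)
        status = "passed" if abs(got - want) < 1e-6 else "FAILED"
        if status == "FAILED":
            all_passed = False
        log.append(f"Verification <{tag}> = {want:.6f} {status} Value is {got:.6f}")
    return all_passed

class FakeController:
    """A hypothetical stand-in that simply echoes forced values back."""
    def __init__(self):
        self.tags = {}
    def force(self, tag, value):
        self.tags[tag] = value
    def read(self, tag):
        return self.tags[tag]

ctrl = FakeController()
ctrl.force("ZFT101A", 55.0)   # force an input
log = []
ok = verify_outputs(ctrl, {"ZFT101A": 55.0}, log)   # read back and compare
print(ok, log[0])
```

The real service performs this same cycle against live controller hardware, unattended, for every step in every scenario.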
Tests are developed from test plans or cause-and-effect charts, which we convert into a spreadsheet (Excel) format. The Test Compiler takes that spreadsheet file as input and generates VP Link test scenarios and HTML test plan documents. The scenarios then run in VP Link and generate a log file for each test. The HTML documents are well suited for posting on an intranet, allowing access from anywhere on the internal network.
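The spreadsheet-to-scenario step can be illustrated with a short sketch. The column layout and the `SET`/`VERIFY` command strings below are hypothetical (the Test Compiler's actual spreadsheet format and scenario syntax are not documented here); CSV stands in for the Excel file to keep the example self-contained.

```python
import csv
import io

# Hypothetical spreadsheet layout: one row per test step, naming a tag to
# force and a tag to verify. Purely illustrative.
SHEET = """step,force_tag,force_value,verify_tag,expected
1,ZFT101A,48.0,ZXV101A,1.0
2,ZFT101B,48.0,ZXV101A,0.0
"""

def compile_steps(text):
    """Turn spreadsheet rows into simple scenario commands."""
    steps = []
    for row in csv.DictReader(io.StringIO(text)):
        steps.append(f"SET {row['force_tag']} = {float(row['force_value']):.6f}")
        steps.append(f"VERIFY {row['verify_tag']} = {float(row['expected']):.6f}")
    return steps

for line in compile_steps(SHEET):
    print(line)
```

Because the scenarios are generated mechanically from the spreadsheet, re-running a validation after a logic change is a matter of recompiling and re-executing, with no manual test-script editing.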
Features and Benefits
- Testing is completely automated and reproducible
– Eliminates human error in testing
– Significantly reduces the time required to complete testing
– Enables more intensive and thorough testing procedures
– Satisfies TÜV guidelines
- Test Procedures and Results are Documented
– Reduces error in plant documentation
– Security checksums ensure documents have not been manually modified
– Satisfies a host of ANSI/ISA, OSHA, BESSE, and IEC guidelines
Documentation
Test Log
When a scenario runs in VP Link, a statement is written to a log file each time outputs are verified; the log file is thus a text document that describes how the test ran.
Below is an excerpt from the log file generated by a scenario run. The FAILED statement (on the second transmitter trip) is included to demonstrate what is logged when a statement fails validation.
00:00:00.000 Starting test 'C:\CAPESIM\ZFT101.SCE' at Mon Mar 12 13:52:05 2007
00:00:00.000 # …
00:00:00.000 # … SET INPUTS TO A NORMAL OPERATING STATE.
00:00:09.073 # TEST AT LINE 7 OF 'ZFT101.SCE' ON ZPB101R
00:00:09.073 Verification <ZXV101A> = 1.000000 passed Value is 1.000000
00:00:09.073 Verification <ZXV101B> = 0.000000 passed Value is 0.000000
00:00:09.073 Verification <ZFT101A> = 55.000000 passed Value is 55.000000
00:00:09.073 Verification <ZFT101B> = 55.000000 passed Value is 55.000000
00:00:09.073 Verification <ZFT101C> = 55.000000 passed Value is 55.000000
00:00:09.073 Verification <ZPT101A> = 225.000000 passed Value is 225.000000
00:00:09.073 Verification <ZPT101B> = 225.000000 passed Value is 225.000000
00:00:09.073 Verification <ZPT101C> = 225.000000 passed Value is 225.000000
00:00:09.073 Verification <ZTT101A> = 245.000000 passed Value is 245.000000
00:00:09.073 Verification <ZTT101B> = 245.000000 passed Value is 245.000000
00:00:09.073 Verification <ZTT101C> = 245.000000 passed Value is 245.000000
00:00:15.091 # …
00:00:15.091 # …FT101 2OO3 TEST
00:00:15.091 # … FIRST TRANSMITTER TEST
00:00:21.120 # TEST AT LINE 12 OF 'ZFT101.SCE' SET ZFT101A = 48.000000
00:00:21.120 Verification <ZXV101A> = 1.000000 passed Value is 1.000000
00:00:21.120 Verification <ZXV101B> = 0.000000 passed Value is 0.000000
00:00:21.120 Verification <ZFT101A> = 48.000000 passed Value is 48.000000
00:00:21.120 Verification <ZFT101B> = 55.000000 passed Value is 55.000000
00:00:21.120 Verification <ZFT101C> = 55.000000 passed Value is 55.000000
00:00:21.120 # …
00:00:21.130 # … SECOND TRANSMITTER TRIP
00:00:27.139 # TEST AT LINE 15 OF 'ZFT101.SCE' SET ZFT101B = 48.000000
00:00:27.139 Verification <ZXV101A> = 0.000000 passed Value is 0.000000
00:00:27.139 **Verification <ZXV101B> = 0.000000 FAILED Value is 1.000000
00:00:27.139 Verification <ZFT101A> = 48.000000 passed Value is 48.000000
00:00:27.139 Verification <ZFT101B> = 48.000000 passed Value is 48.000000
00:00:27.139 Verification <ZFT101C> = 55.000000 passed Value is 55.000000
00:00:27.139 # …
00:00:27.139 # … SET TRANSMITTERS BACK TO NORMAL
00:00:33.157 # TEST AT LINE 18 OF 'ZFT101.SCE' SET ZFT101A,… = 55.000000
00:00:33.157 Verification <ZXV101A> = 0.000000 passed Value is 0.000000
00:00:33.157 Verification <ZXV101B> = 0.000000 passed Value is 0.000000
00:00:33.157 Verification <ZFT101A> = 55.000000 passed Value is 55.000000
00:00:33.157 Verification <ZFT101B> = 55.000000 passed Value is 55.000000
00:00:33.157 Verification <ZFT101C> = 55.000000 passed Value is 55.000000
00:00:39.166 # TEST AT LINE 19 OF 'ZFT101.SCE' ON ZPB101R
00:00:39.166 Verification <ZXV101A> = 1.000000 passed Value is 1.000000
00:00:39.166 Verification <ZXV101B> = 0.000000 passed Value is 0.000000
The numeric entry on the left is the elapsed time since the scenario started. For example, the FAILED statement occurred 27 seconds after the test started.
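Because the log is plain text with a fixed line shape, it is straightforward to post-process. The sketch below pulls the elapsed time and tag out of each FAILED verification; the regular expression is keyed to the line format shown in the excerpt above, and the two sample lines are copied from it.

```python
import re

# Two lines copied from the excerpt above: one passed, one FAILED
# (failed lines are prefixed with "**" in the log).
LOG = """00:00:21.120 Verification <ZFT101A> = 48.000000 passed Value is 48.000000
00:00:27.139 **Verification <ZXV101B> = 0.000000 FAILED Value is 1.000000"""

LINE = re.compile(
    r"(?P<h>\d+):(?P<m>\d+):(?P<s>[\d.]+)\s+\*{0,2}Verification\s+<(?P<tag>\w+)>"
    r"\s+=\s+(?P<want>[\d.]+)\s+(?P<status>passed|FAILED)\s+Value is\s+(?P<got>[\d.]+)"
)

def failed_verifications(log_text):
    """Return (elapsed_seconds, tag, expected, actual) for each FAILED line."""
    out = []
    for line in log_text.splitlines():
        m = LINE.match(line)
        if m and m.group("status") == "FAILED":
            elapsed = int(m["h"]) * 3600 + int(m["m"]) * 60 + float(m["s"])
            out.append((elapsed, m["tag"], float(m["want"]), float(m["got"])))
    return out

# Prints the single FAILED entry: ZXV101B expected 0.0 but read 1.0,
# roughly 27 seconds into the run.
print(failed_verifications(LOG))
```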
Test Log Summary
A test log summary HTML document is generated that displays an overview of the tests and their results. It includes each test's name, its duration (how long it took to run), performance statistics, and the source document from which the test scenario was generated. If a test contains any FAILED statements, the Ver_Failed cell for that test reports the number of failed statements and is highlighted with a yellow background, and the test is moved to the top of the document. All failed tests are therefore listed before the tests that ran successfully, making failures easy to identify.
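The "failed tests first" ordering and the highlighted Ver_Failed cell can be sketched as follows. The per-test records and the HTML markup are invented for illustration; the real summary is built from the actual log files.

```python
# Hypothetical per-test records; the real summary reads each test's log file.
tests = [
    {"name": "ZFT101", "duration_s": 39.2, "ver_failed": 1},
    {"name": "ZPT101", "duration_s": 28.0, "ver_failed": 0},
    {"name": "ZTT101", "duration_s": 31.5, "ver_failed": 0},
]

# Failed tests sort to the top, mirroring the summary document's ordering.
ordered = sorted(tests, key=lambda t: t["ver_failed"] == 0)

rows = []
for t in ordered:
    # Highlight the Ver_Failed cell in yellow when any verification failed.
    highlight = ' style="background:yellow"' if t["ver_failed"] else ""
    rows.append(
        f"<tr><td>{t['name']}</td><td>{t['duration_s']:.1f}</td>"
        f"<td{highlight}>{t['ver_failed']}</td></tr>"
    )
print("\n".join(rows))
```

A stable sort on "has zero failures" keeps the original order within each group while floating every failed test above the passing ones.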