With recent updates, partcl can now use the tcltest.tcl library that comes with tcl 8.5.4 [1].
For some time, partcl had been limping along with our hand-rolled Test::More analog, run against slightly processed versions of tcl's .test files [2]. That harness generated TAP output, and could be run via the standard perl testing tools.
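To give a flavor of that approach (a minimal sketch, not partcl's actual harness), a TAP-emitting [test] analog in tcl can be as small as:

# Sketch of a TAP-style [test]: run a body, compare its result
# to the expectation, and print ok / not ok lines.
set testnum 0
proc test {name body expected} {
    global testnum
    incr testnum
    if {[catch {uplevel 1 $body} actual]} {
        puts "not ok $testnum - $name # died: $actual"
    } elseif {$actual eq $expected} {
        puts "ok $testnum - $name"
    } else {
        puts "not ok $testnum - $name # got '$actual'"
    }
}

test llength-basic {llength {a b c}} 3   ;# prints: ok 1 - llength-basic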
The current version of 'make spectest' processes the raw .test files from the CVS repository [3]. tcl's test output format isn't TAP [4], but it's easily understood: a clean run prints only a summary line, and any additional output means something failed. Here's a clean run with partcl of a single test file:
$ ./tclsh t_tcl/llength.test
llength.test: Total 6 Passed 6 Skipped 0 Failed 0
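For reference, the cases inside those .test files are written with tcltest's [test] command. A representative (made-up) entry, using the positional name/description/body/result form, looks something like this:

package require tcltest
namespace import ::tcltest::*

# positional form: name, description, body, expected result
test llength-1.1 {llength of a simple list} {
    llength {a b c d}
} {4}

cleanupTests   ;# prints the Total/Passed/Skipped/Failed summary line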
Here's some output from a failing test for comparison. We haven't implemented [case] in partcl because it's deprecated, so our failure mode here doesn't match the spec:
==== case-3.3 single-argument form for pattern/command pairs FAILED
==== Contents of test case:
list [catch {case z in {a 2 b}} msg] $msg
---- Result was:
1 {invalid command name "case"}
---- Result should have been (exact matching):
1 {extra case pattern with no body}
==== case-3.3 FAILED
It will also report on differences in return value, making it very obvious what needs fixing on the development side. For example, this let me easily update some exceptions thrown by partcl that were using a default parrot exception type instead of the specific one that corresponds to tcl's [error].
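As an illustration (a toy example, not partcl's code), here's how tcl's [error] surfaces through [catch]; the case-3.3 test above does exact string matching on just this kind of result pair:

# [error] raises an exception; [catch] returns 1 and stores the
# message in msg, producing exactly the pair the test compares.
set rc [catch {error {extra case pattern with no body}} msg]
puts [list $rc $msg]   ;# prints: 1 {extra case pattern with no body}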
I've checked a file into the repository to track the progress of the suite. This is analogous to the file rakudo (Perl 6 on parrot) is using.
"date","revision","files","test","pass","fail","skip"
"2008-09-25 00:00",31396,38,1481,743,290,448
"2008-09-26 04:51",31427,56,3659,2463,812,384
Most of the gains in the first day are from small improvements to code invoked by the test suite, rather than any new real features, though there are a few improvements there as well.
We still have a bit of work to do to successfully execute all the test files (and even more to pass all the tests).
The most individual tests we ever logged as passing with the converted version of the files was 3031; with the 700 or so passing tests in test files that don't yet run to completion (and therefore aren't in the listing above), we've already exceeded that. (I don't want to add those passes to the tracking file yet, because they're harder to count if the test file doesn't finish. Plus it feels like cheating.)
Next to come is a document describing the failing test files, or (hopefully) the failing individual tests. This will be used to drive whatever tuits I have available, hopefully getting the biggest number of passing tests per tuit. It will also point anyone interested in contributing at a hopefully small effort that gives us a concrete result.
- Not exactly a pristine copy: one of the core features of tcltest (where should I send my output?) requires some relatively advanced functionality - tcl's tests aren't designed, like perl6's, to let new implementations ease into things. I've tacked on two replacement procs in our copy of tcltest that for now always say "just print to stdout/stderr"; a sketch of what those might look like follows these notes. Still, that's two one-line procedures compared to the original 3375 lines of tcltest.tcl.
- The additional processing consisted of loading our version of Test::More, which had a stripped-down version of [test]. That only let us run the very basic tests, though; now that we're running the native tests, we can at least try some of the more complicated ones, which should help bring up our passing ratio.
- The 'spectest' target checks out a copy of the test directory from the 8.5.4 tagged release in tcl's CVS repository, then uses a small script to execute only those test files that we know run to completion.
- Adding an option to tcltest that generates TAP is something that could be done upstream in tcl itself, and would allow that project to integrate with any TAP-based testing tool.
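For the curious, the two stand-ins mentioned in the first note could be as simple as the following. This is an assumption on my part that the stubbed procs are tcltest's channel accessors; treat the names as illustrative:

# Always route test output to the standard channels, bypassing
# tcltest's configurable output-redirection machinery.
proc ::tcltest::outputChannel {args} { return stdout }
proc ::tcltest::errorChannel  {args} { return stderr }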