Contents of /markup/html/whatpm/t/tokenizer-result.txt

Revision 1.129
Sun Mar 2 14:32:27 2008 UTC by wakaba
Branch: MAIN
Changes since 1.128: +57 -8 lines
File MIME type: text/plain
++ whatpm/t/ChangeLog	2 Mar 2008 14:06:22 -0000
	* tokenizer-test-1.test: Tests for |<span ===>| are added (HTML5
	revision 1292).  Tests for & at the end of an attribute value
	are added (HTML5 revision 1296).  Tests for bogus comments
	are added (HTML5 revision 1297).  Tests for |=| in
	unquoted attribute values are added (HTML5 revision 1299).
	Tests for single or double quotes in unquoted attribute
	values or attribute names and tests for missing spaces
	between attributes are added (HTML5 revision 1303).

2008-03-02  Wakaba  <wakaba@suika.fam.cx>

++ whatpm/Whatpm/ChangeLog	2 Mar 2008 14:05:38 -0000
	* HTML.pm.src: Raise a parse error for |<span ===>| (HTML5 revision
	1292).  Entities are not parsed in comment-like parts of RCDATA
	elements (HTML5 revision 1294).  Allow bare & at the end
	of attribute value literals (HTML5 revision 1296).  More
	quirks mode doctypes (HTML5 revision 1302).  Require spaces
	between attributes and ban attribute names or unquoted
	attribute values containing single or double quotes (HTML5
	revision 1303).

2008-03-02  Wakaba  <wakaba@suika.fam.cx>
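As an illustration of the two attribute rules from HTML5 revision 1303 that the failing tests below exercise (a parse error for missing spaces between attributes, and for single or double quotes inside unquoted attribute values), here is a minimal Python sketch. It is not Whatpm's HTML.pm.src implementation, and the error strings are made up for the example; it only scans the attribute text of a start tag (the part after the tag name).

```python
def attribute_parse_errors(attrs):
    """Report the two HTML5 revision 1303 parse errors in the attribute
    text of a start tag: a quoted value immediately followed by another
    attribute, and a quote character inside an unquoted value."""
    errors = []
    i, n = 0, len(attrs)
    while i < n:
        c = attrs[i]
        if c.isspace():
            i += 1
        elif c == '=':
            # An attribute value follows the '='.
            i += 1
            while i < n and attrs[i].isspace():
                i += 1
            if i < n and attrs[i] in "'\"":
                # Quoted value: skip to the matching quote.
                quote = attrs[i]
                i += 1
                while i < n and attrs[i] != quote:
                    i += 1
                i += 1  # move past the closing quote
                # The next character must be whitespace (or end of tag),
                # otherwise two attributes are run together.
                if i < n and not attrs[i].isspace():
                    errors.append('no space between attributes')
            else:
                # Unquoted value: runs to the next whitespace.
                start = i
                while i < n and not attrs[i].isspace():
                    i += 1
                if any(q in attrs[start:i] for q in "'\""):
                    errors.append('quote in unquoted attribute value')
        else:
            # A character of an attribute name; just advance.
            i += 1
    return errors
```

For example, the input behind failing test 15 below, `a='b'c='d'`, yields one "no space between attributes" error, while the well-formed `a='b' c='d'` yields none.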

1 wakaba 1.129 1..396
2 wakaba 1.1 # Running under perl version 5.008007 for linux
3 wakaba 1.129 # Current time local: Sun Mar 2 23:30:38 2008
4     # Current time GMT: Sun Mar 2 14:30:38 2008
5 wakaba 1.1 # Using Test.pm version 1.25
6 wakaba 1.11 # t/tokenizer/test1.test
7 wakaba 1.20 ok 1
8     ok 2
9     ok 3
10 wakaba 1.1 ok 4
11 wakaba 1.20 ok 5
12 wakaba 1.1 ok 6
13     ok 7
14     ok 8
15     ok 9
16     ok 10
17     ok 11
18     ok 12
19     ok 13
20     ok 14
21 wakaba 1.129 not ok 15
22     # Test 15 got: "$VAR1 = [\n qq'ParseError',\n [\n qq'StartTag',\n qq'h',\n {\n qq'c' => qq'd',\n qq'a' => qq'b'\n }\n ]\n ];\n" (t/HTML-tokenizer.t at line 158 fail #15)
23     # Expected: "$VAR1 = [\n [\n qq'StartTag',\n qq'h',\n {\n qq'c' => qq'd',\n qq'a' => qq'b'\n }\n ]\n ];\n" (Multiple atts no space: qq'<h a=\x{27}b\x{27}c=\x{27}d\x{27}>')
24     # Got 1 extra line at line 2:
25     # + " qq'ParseError',\n"
26     # t/HTML-tokenizer.t line 158 is: ok $parser_dump, $expected_dump,
27 wakaba 1.1 ok 16
28     ok 17
29     ok 18
30     ok 19
31     ok 20
32     ok 21
33 wakaba 1.25 ok 22
34     ok 23
35 wakaba 1.1 ok 24
36 wakaba 1.22 ok 25
37     ok 26
38     ok 27
39 wakaba 1.1 ok 28
40     ok 29
41     ok 30
42     ok 31
43     ok 32
44     ok 33
45 wakaba 1.18 ok 34
46 wakaba 1.1 ok 35
47     ok 36
48     ok 37
49 wakaba 1.8 ok 38
50 wakaba 1.28 ok 39
51     ok 40
52 wakaba 1.43 ok 41
53     ok 42
54 wakaba 1.11 # t/tokenizer/test2.test
55 wakaba 1.43 not ok 43
56 wakaba 1.48 # Test 43 got: "$VAR1 = [\n qq'ParseError',\n qq'ParseError',\n [\n qq'DOCTYPE',\n undef,\n undef,\n undef,\n 0\n ]\n ];\n" (t/HTML-tokenizer.t at line 158 fail #43)
57 wakaba 1.47 # Expected: "$VAR1 = [\n qq'ParseError',\n qq'ParseError',\n [\n qq'DOCTYPE',\n qq'',\n undef,\n undef,\n 0\n ]\n ];\n" (DOCTYPE without name: qq'<!DOCTYPE>')
58 wakaba 1.20 # Line 6 is changed:
59 wakaba 1.8 # - " qq'',\n"
60 wakaba 1.20 # + " undef,\n"
61     ok 44
62     ok 45
63     ok 46
64     ok 47
65     ok 48
66     ok 49
67     ok 50
68     ok 51
69 wakaba 1.97 ok 52
70     ok 53
71     ok 54
72     ok 55
73 wakaba 1.9 ok 56
74     ok 57
75 wakaba 1.1 ok 58
76     ok 59
77     ok 60
78 wakaba 1.19 ok 61
79 wakaba 1.1 ok 62
80     ok 63
81 wakaba 1.129 not ok 64
82     # Test 64 got: "$VAR1 = [\n [\n qq'StartTag',\n qq'h',\n {\n qq'a' => qq'&'\n }\n ]\n ];\n" (t/HTML-tokenizer.t at line 158 fail #64)
83     # Expected: "$VAR1 = [\n qq'ParseError',\n [\n qq'StartTag',\n qq'h',\n {\n qq'a' => qq'&'\n }\n ]\n ];\n" (Unescaped ampersand in attribute value: qq'<h a=\x{27}&\x{27}>')
84     # Line 2 is missing:
85     # - " qq'ParseError',\n"
86 wakaba 1.1 ok 65
87     ok 66
88     ok 67
89     ok 68
90     ok 69
91     ok 70
92 wakaba 1.34 ok 71
93     ok 72
94 wakaba 1.1 ok 73
95     ok 74
96 wakaba 1.21 ok 75
97     ok 76
98 wakaba 1.1 ok 77
99 wakaba 1.96 # t/tokenizer/test3.test
100 wakaba 1.1 ok 78
101     ok 79
102     ok 80
103 wakaba 1.34 ok 81
104 wakaba 1.15 ok 82
105 wakaba 1.1 ok 83
106     ok 84
107 wakaba 1.25 ok 85
108     ok 86
109 wakaba 1.34 ok 87
110 wakaba 1.1 ok 88
111     ok 89
112     ok 90
113     ok 91
114     ok 92
115     ok 93
116     ok 94
117 wakaba 1.8 ok 95
118     ok 96
119     ok 97
120     ok 98
121     ok 99
122     ok 100
123 wakaba 1.96 ok 101
124     ok 102
125     ok 103
126     ok 104
127     not ok 105
128     # Test 105 got: "$VAR1 = [\n qq'ParseError',\n [\n qq'DOCTYPE',\n undef,\n undef,\n undef,\n 0\n ]\n ];\n" (t/HTML-tokenizer.t at line 158 fail #105)
129 wakaba 1.47 # Expected: "$VAR1 = [\n qq'ParseError',\n [\n qq'DOCTYPE',\n qq'',\n undef,\n undef,\n 0\n ]\n ];\n" (<!doctype >: qq'<!doctype >')
130 wakaba 1.43 # Line 5 is changed:
131     # - " qq'',\n"
132     # + " undef,\n"
133 wakaba 1.96 not ok 106
134     # Test 106 got: "$VAR1 = [\n qq'ParseError',\n [\n qq'DOCTYPE',\n undef,\n undef,\n undef,\n 0\n ]\n ];\n" (t/HTML-tokenizer.t at line 158 fail #106)
135 wakaba 1.47 # Expected: "$VAR1 = [\n qq'ParseError',\n [\n qq'DOCTYPE',\n qq'',\n undef,\n undef,\n 0\n ]\n ];\n" (<!doctype : qq'<!doctype ')
136 wakaba 1.43 # Line 5 is changed:
137     # - " qq'',\n"
138     # + " undef,\n"
139 wakaba 1.8 ok 107
140     ok 108
141     ok 109
142     ok 110
143     ok 111
144     ok 112
145     ok 113
146 wakaba 1.10 ok 114
147     ok 115
148     ok 116
149     ok 117
150     ok 118
151     ok 119
152     ok 120
153     ok 121
154 wakaba 1.39 ok 122
155 wakaba 1.18 ok 123
156     ok 124
157     ok 125
158     ok 126
159 wakaba 1.20 ok 127
160     ok 128
161     ok 129
162     ok 130
163     ok 131
164     ok 132
165     ok 133
166     ok 134
167     ok 135
168     ok 136
169 wakaba 1.21 ok 137
170     ok 138
171 wakaba 1.20 ok 139
172     ok 140
173     ok 141
174 wakaba 1.28 ok 142
175 wakaba 1.20 ok 143
176     ok 144
177     ok 145
178     ok 146
179 wakaba 1.129 not ok 147
180     # Test 147 got: "$VAR1 = [\n qq'ParseError',\n qq'ParseError',\n qq'ParseError',\n [\n qq'StartTag',\n qq'z',\n {\n 0 => qq''\n }\n ]\n ];\n" (t/HTML-tokenizer.t at line 158 fail #147)
181     # Expected: "$VAR1 = [\n qq'ParseError',\n qq'ParseError',\n [\n qq'StartTag',\n qq'z',\n {\n 0 => qq''\n }\n ]\n ];\n" (<z/0='': qq'<z/0=\x{27}\x{27}')
182     # Got 1 extra line at line 4:
183     # + " qq'ParseError',\n"
184 wakaba 1.22 ok 148
185     ok 149
186     ok 150
187 wakaba 1.129 not ok 151
188     # Test 151 got: "$VAR1 = [\n qq'ParseError',\n qq'ParseError',\n qq'ParseError',\n [\n qq'StartTag',\n qq'z',\n {\n 0 => qq''\n }\n ]\n ];\n" (t/HTML-tokenizer.t at line 158 fail #151)
189     # Expected: "$VAR1 = [\n qq'ParseError',\n qq'ParseError',\n [\n qq'StartTag',\n qq'z',\n {\n 0 => qq''\n }\n ]\n ];\n" (<z/0="": qq'<z/0=""')
190     # Got 1 extra line at line 4:
191     # + " qq'ParseError',\n"
192 wakaba 1.22 ok 152
193     ok 153
194     ok 154
195     ok 155
196     ok 156
197 wakaba 1.28 ok 157
198     ok 158
199     ok 159
200     ok 160
201     ok 161
202     ok 162
203     ok 163
204     ok 164
205     ok 165
206     ok 166
207     ok 167
208     ok 168
209 wakaba 1.96 # t/tokenizer/test4.test
210 wakaba 1.28 ok 169
211     ok 170
212     ok 171
213     ok 172
214     ok 173
215     ok 174
216     ok 175
217     ok 176
218     ok 177
219     ok 178
220 wakaba 1.33 ok 179
221 wakaba 1.34 ok 180
222 wakaba 1.38 ok 181
223     ok 182
224 wakaba 1.43 ok 183
225     ok 184
226     ok 185
227     ok 186
228     ok 187
229     ok 188
230     ok 189
231     ok 190
232     ok 191
233     ok 192
234     ok 193
235     ok 194
236     ok 195
237     ok 196
238     ok 197
239 wakaba 1.96 ok 198
240     ok 199
241     ok 200
242     ok 201
243     not ok 202
244     # Test 202 got: "$VAR1 = [\n qq'ParseError',\n qq'ParseError',\n [\n qq'Comment',\n qq'doc'\n ],\n [\n qq'Character',\n qq'\\x{FFFD}'\n ]\n ];\n" (t/HTML-tokenizer.t at line 158 fail #202)
245 wakaba 1.47 # Expected: "$VAR1 = [\n qq'ParseError',\n [\n qq'Comment',\n qq'doc'\n ],\n qq'ParseError',\n [\n qq'Character',\n qq'\\x{FFFD}'\n ]\n ];\n" (U+0000 in lookahead region after non-matching character: qq'<!doc>\x{00}')
246 wakaba 1.43 # Got 1 extra line at line 3:
247     # + " qq'ParseError',\n"
248     # Line 8 is missing:
249     # - " qq'ParseError',\n"
250     ok 203
251     ok 204
252     ok 205
253     ok 206
254     ok 207
255     ok 208
256     ok 209
257     ok 210
258     ok 211
259     ok 212
260     ok 213
261     ok 214
262     ok 215
263     ok 216
264 wakaba 1.96 # t/tokenizer/contentModelFlags.test
265 wakaba 1.43 ok 217
266     ok 218
267     ok 219
268     ok 220
269     ok 221
270     ok 222
271     ok 223
272     ok 224
273     ok 225
274     ok 226
275     ok 227
276     ok 228
277     ok 229
278 wakaba 1.96 # t/tokenizer/escapeFlag.test
279 wakaba 1.43 ok 230
280     ok 231
281     ok 232
282     ok 233
283     ok 234
284     ok 235
285 wakaba 1.96 # t/tokenizer-test-1.test
286 wakaba 1.43 ok 236
287     ok 237
288     ok 238
289     ok 239
290     ok 240
291     ok 241
292     ok 242
293     ok 243
294     ok 244
295     ok 245
296     ok 246
297     ok 247
298     ok 248
299     ok 249
300     ok 250
301     ok 251
302     ok 252
303     ok 253
304     ok 254
305     ok 255
306     ok 256
307     ok 257
308     ok 258
309     ok 259
310     ok 260
311     ok 261
312     ok 262
313     ok 263
314     ok 264
315     ok 265
316     ok 266
317     ok 267
318     ok 268
319     ok 269
320     ok 270
321     ok 271
322     ok 272
323     ok 273
324     ok 274
325     ok 275
326     ok 276
327     ok 277
328     ok 278
329     ok 279
330     ok 280
331     ok 281
332     ok 282
333     ok 283
334     ok 284
335     ok 285
336     ok 286
337     ok 287
338     ok 288
339     ok 289
340     ok 290
341     ok 291
342     ok 292
343     ok 293
344     ok 294
345     ok 295
346     ok 296
347     ok 297
348     ok 298
349     ok 299
350     ok 300
351     ok 301
352     ok 302
353     ok 303
354     ok 304
355     ok 305
356     ok 306
357     ok 307
358     ok 308
359     ok 309
360     ok 310
361     ok 311
362     ok 312
363     ok 313
364     ok 314
365     ok 315
366     ok 316
367     ok 317
368     ok 318
369     ok 319
370     ok 320
371     ok 321
372     ok 322
373     ok 323
374     ok 324
375     ok 325
376     ok 326
377     ok 327
378     ok 328
379     ok 329
380     ok 330
381     ok 331
382     ok 332
383     ok 333
384     ok 334
385     ok 335
386     ok 336
387     ok 337
388 wakaba 1.59 ok 338
389     ok 339
390     ok 340
391     ok 341
392     ok 342
393     ok 343
394     ok 344
395     ok 345
396     ok 346
397     ok 347
398 wakaba 1.62 ok 348
399     ok 349
400     ok 350
401     ok 351
402     ok 352
403     ok 353
404     ok 354
405     ok 355
406     ok 356
407     ok 357
408     ok 358
409     ok 359
410 wakaba 1.96 ok 360
411     ok 361
412     ok 362
413     ok 363
414 wakaba 1.129 ok 364
415     ok 365
416     ok 366
417     ok 367
418     ok 368
419     ok 369
420     ok 370
421     ok 371
422     ok 372
423     ok 373
424     ok 374
425     ok 375
426     ok 376
427     ok 377
428     ok 378
429     ok 379
430     ok 380
431     ok 381
432     ok 382
433     ok 383
434     ok 384
435     ok 385
436     ok 386
437     ok 387
438     ok 388
439     ok 389
440     ok 390
441     ok 391
442     ok 392
443     ok 393
444     ok 394
445     ok 395
446     ok 396
