Version tested: 8.4.2
TotalValidator Pro is a stand-alone application written in Java. It is primarily targeted at Microsoft Windows, although it can be run on Linux, BSD, or other Unix-like systems. On such systems, TotalValidator can be used in graphical mode or executed from the shell. There are problems with either mode of operation:
% find . -type f | grep -E '(css|htm|html|svg|xht|xhtml)$' > file.html

in conjunction with the following edit of the resulting file (assuming the vi editor, for example):

:%s/^/<a href="/g | %s/$/"><\/a>/g

Of course, auto-generation is also necessary for any arbitrary collection of online resources, as long as there is no document (or set of interlinked documents) that already links to them all.
While it is evident that TotalValidator was not written with a Unix console in mind, it remains the only application which is useful for local debugging without the need to operate or duplicate resources on the remote server.
The primary mode of operation for TotalValidator is to generate HTML output, one file per resource, with errors flagged via extra markup and the local context preserved. This can produce an enormous number of files for larger testsuites. One alternative, particularly for the command line version, would be to redirect errors either to standard error (stderr), in the vein of compilers, or to standard output (stdout), either of which could be piped into a file or processed with standard tools. For debugging, a separate "abort on first error" mode would also be useful, optionally allowing the user to skip the failing resource interactively. Currently, processing a large number of files is not particularly efficient. The following are additional failures:
CSS is only processed when referenced from a document, either through a link element or through @import (the latter either in an external stylesheet linked via the former method, or in an inline style element). Stand-alone CSS files are not processed, and never reported on. A workaround is to generate a list of stylesheets:

% find . -name '*.css' > stylelist

followed by the following from within the vi editor, within the head element:

:r /path/to/stylelist
:%s/^/<link rel='stylesheet' href='/g | %s/$/'>/g
TIMESTAMP com.totalvalidator.Validator main
SEVERE: Validation Failed
com.totalvalidator.exceptions.ValidationException: Error retrieving page: URI : 403 Forbidden
    at com.totalvalidator.utils.FileUtils.getReaderError (FileUtils.java:495)
    at com.totalvalidator.utils.FileUtils.getReader (FileUtils.java:349)
    at com.totalvalidator.Validator.validate (Validator.java:301)
    at com.totalvalidator.Validator.main (Validator.java:192)

For clarity, in the example above the output was broken into lines, with the timestamp and URI in question edited out.
TIMESTAMP com.totalvalidator.Validator main
SEVERE: Validation Failed
com.totalvalidator.exceptions.ValidationException: Error retrieving page: URI : 404 Not Found
    at com.totalvalidator.utils.FileUtils.getReaderError (FileUtils.java:495)
    at com.totalvalidator.utils.FileUtils.getReader (FileUtils.java:349)
    at com.totalvalidator.Validator.validate (Validator.java:301)
    at com.totalvalidator.Validator.main (Validator.java:192)
In other words, TotalValidator silently fails to process valid resources, while it aborts when it shouldn't, instead of skipping the forbidden or missing resources in question.
Traditionally, web addresses (or parts thereof) could only feature a subset of the ASCII character set. With the advent of internationalization, that changed. Resource locators (URLs) under any domain can contain Unicode characters anywhere in the string, except in the domain name itself. Domain names are treated differently, since top-level domains require registration with the relevant registrar or administrative body. While the problems of vetting and spoofing prevention are outside the scope of this document, it should be noted that IRIs, or Internationalized Resource Identifiers, are standardized and have been supported in all browsers for many years. See RFC 3986 and RFC 3987 from 2005 for definitions, and the W3C resource repository for more information. Examples are:
As of this writing all addresses are real URLs. These characters need not be specially encoded in anchors, as long as the relevant encoding is set for the document, whether within the page itself, or preferably through a server header. TotalValidator throws an error whenever it encounters a non-ASCII character:
E622: The 'href' attribute does not have a valid value: It can only contain a limited set of US-ASCII characters (see rfc2396).
The referenced document dates from 1998, and is itself marked as obsoleted by the newer standards quoted above. Documents with such addresses are ubiquitous, with TotalValidator producing tons of false positives, especially outside the Anglosphere.
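For illustration only (the address is a placeholder under the reserved example.org domain, not one of the live examples referred to above), an anchor of this kind is enough to trigger the error:

<a href="http://example.org/kafé/">kafé</a>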
The most prodigious source of false positives in TotalValidator is its failure to check and process server headers, which are the most efficient way to declare encoding, language, content type, navigation structure of auxiliary documents, and even stylesheets. For example, the document language can be declared in the Apache server configuration:
AddLanguage no .html
This can be overridden anywhere down in the URL hierarchy, or from the top configuration file, for example through the <Files> directive.
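A sketch of the per-directory variant (the .htaccess placement and the English subtree are assumptions, mirroring the lynx example below rather than any actual configuration):

# .htaccess in the /en/ subtree: override the site-wide Norwegian default
AddLanguage en .html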
Server headers are trivially checked from within a graphical browser; here are examples from Firefox/Pentadactyl:
The same information can be trivially obtained from the command line, using a text browser:
% lynx -head -dump http://unicus.no/ | grep Language
Content-Language: no
% lynx -head -dump http://unicus.no/en/ | grep Language
Content-Language: en
TotalValidator does not check and process server headers, and thus will complain that the language is not set for the document, with the following error:
W864 [WCAG 2.0 3.1.1 (A)] Use the 'lang' or 'xml:lang' attribute to denote the primary language of the document
That is poor advice even for small servers, let alone huge sites with thousands of pages. Using the lang attribute makes sense for stand-alone documents (such as testcases), and even then only for those which are shared through other means. Documents served with a valid server header should never fail validation. In addition, it is worth keeping in mind that the html element is optional. The algorithm in TotalValidator seems very simplistic: even setting aside the issues of server configuration, some documents will fail while others will pass, depending on whether the root element is explicitly specified, of which more later.
Failure to check and parse server headers is a critical flaw in automated accessibility checkers.
The next source of false positives is very similar, a continuation of the theme. An efficient way (and in the case of XHTML the only correct way) to define the content type is the server header. For example, in the Apache server configuration:
AddType 'application/xhtml+xml; charset=utf-8' .html
XHTML documents served as HTML (i.e. when the server is not properly configured) are harmful. TotalValidator does not check headers, and thus will use HTML validation for XHTML documents, even if a relevant option is added to the command line checker, or picked from the user interface drop-down during manual testing, as long as the extension is something other than .xhtml. Web browsers, again, have no problem with this:
The same check can be performed on the command line:
% lynx -head -dump http://aappilattoq.info/tools/hsl-svg/ | grep Content-Type
Content-Type: application/xhtml+xml; charset=utf-8
XHTML documents may or may not feature a DOCTYPE. Indeed, the XHTML5 specification recommends that it be omitted, since it is not necessary as long as the content type is properly set by the server. TotalValidator is not aware of this, and does not check for server headers, resulting in valid documents being parsed as HTML, throwing well-formedness errors regarding self-closing elements.
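For reference, a minimal sketch of such a document (the content is illustrative): it carries no DOCTYPE and is valid when served as application/xhtml+xml.

<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>a simple document</title>
  </head>
  <body>
    <p>Served as application/xhtml+xml, with the DOCTYPE omitted.</p>
  </body>
</html>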
The only way around this is to take the document offline, change the extension to .xhtml, and retry. This is not acceptable, particularly if documents are constructed on the fly, for example through server-side includes.
Continuing the XML theme, some valid doctypes are unknown to TotalValidator. For example, a document with the following root element will always fail validation, regardless of which of the available DTD options is chosen on the command line or through the user interface:
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:svg="http://www.w3.org/2000/svg">
As elaborated in the previous section, for documents served with an application/xhtml+xml content type, a doctype is spurious, and is best omitted. This is especially true in this case:
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1 plus MathML 2.0 plus SVG 1.1//EN" "http://www.w3.org/2002/04/xhtml-math-svg/xhtml-math-svg.dtd">
which produces the following in TotalValidator:
E010 Unrecognised or malformed <!DOCTYPE> tag. (if 'auto-detecting' then a best-fit <!DOCTYPE> will be used).
This false positive, in conjunction with the failure to process server headers (and hence most online documents) elaborated on in the previous section, severely limits the software's usefulness with respect to XHTML 1.x and 5.x documents, even moderately complex mixed XHTML/XML and SVG documents, as well as MathML documents. While these documents are not nearly as ubiquitous as they were a decade ago, due to the rise of the HTML5 specification, XML is still used widely enough to make this a big issue.
As mentioned before, TotalValidator does not check and process server headers. Stylesheets can be added to documents via link elements, but links need not reside within the documents themselves; they can be set on the server. For example, in the Apache server configuration:
<Files ~ "index\.(html)$"> Header add Link '</css/define.css>; rel="stylesheet"' </Files>
This directive injects a stylesheet link into every matching document (here, index.html files). It may be used as a global stylesheet for the entire site, as a base stylesheet to be overridden by document-specific stylesheets, or as a way to neatly define CSS Variables, on their own a useful accessibility feature of cascading stylesheets. The feature is unfortunately not mentioned in the WCAG 2.0 standard, which it post-dates.
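As an illustration (the property names and values are hypothetical), such a base stylesheet could centralise custom properties for document-specific stylesheets to reuse or override:

:root {
    /* hypothetical site-wide defaults */
    --page-background: #fdfdf6;
    --page-foreground: #1a1a1a;
}
body {
    background: var(--page-background);
    color: var(--page-foreground);
}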
TotalValidator is unaware of such links. Nonsensical or otherwise erroneous headers sent by a misconfigured server will also not be caught.
Other link types can also be specified on the server, such as the site map, copyright notice, newsfeed, and many more:
<Files ~ "index\.(html)$"> Header add Link '</index/>; rel=index' Header add Link '</legal/>; rel=copyright' Header add Link '</atom.xml>; rel=alternate; type=application/atom+xml; title="site updates newsfeed"' </Files>
These links affect accessibility in more ways than one. Some applications populate a separate navigation bar with these links, where they may appear as text and/or icon buttons. Others may add them to a text menu. Some more advanced search engine robots process server headers and may present the information extracted this way in the site overview. It is also recommended that newsfeeds be attached to documents through such a link.
TotalValidator is unaware of such links. Nonsensical or otherwise erroneous headers sent by a misconfigured server will not be caught either. Documents linking to newsfeeds within the body of the document, but neither through a head link nor a server header link, are not flagged.
HTML5 documents do not require as much basic structure as HTML4 documents. For a minimal document, all that is needed is the simplified doctype and a title. Even in the older specification, some basic structure was optional. The following is a valid (albeit not very useful) document:
<!doctype html>
<title>a simple document</title>
So are these:
<!doctype html>
<title>a simple document</title>
<link rel="stylesheet" href="/css/all.css">
<h1>roses are blue violets are red</h1>
<!doctype html>
<title>a simple document</title>
<link rel="stylesheet" href="/css/all.css">
<body>
<h1>roses are blue violets are red</h1>
<!doctype html>
<html>
<title>a simple document</title>
<link rel="stylesheet" href="/css/all.css">
<h1>roses are blue violets are red</h1>
<!doctype html>
<title>a simple document</title>
<style>
html { color: white; background: black; }
</style>
<h1>roses are blue violets are red</h1>
And so on. TotalValidator is unfortunately living in the past. Such documents will be flagged as invalid, as they do not have an explicitly added root element, head element, and body element. In addition, the nature of the bugs in TotalValidator becomes clear once the inline error report shows a body tag inserted as a child of the style element.
E609 This tag or content is not allowed here. One of the following was expected: <base> <link> <meta> <noscript> <script> <style> <template>
Parser failures such as these, and the subsequent validation failures, have a very high impact. They affect most testsuites written by or contributed to the standards bodies and used by browser vendors for development, and they affect the exponentially growing number of HTML5 sites. In fact, a modern website is very unlikely to pass validation with TotalValidator, while the number of falsely reported errors will be a high multiple of the number of pages involved, hindering web development and leading to accessibility being ignored due to the cost of wading through what could easily be tens, or even thousands, of bogus errors per site.
One more example concerns the language attribute, which was already covered in one of the previous sections. Let's assume that the language was correctly specified on the server. The following documents are then equivalent:
<!doctype html>
<html>
<title>a simple document</title>
<h1>roses are blue violets are red</h1>
<!doctype html>
<title>a simple document</title>
<h1>roses are blue violets are red</h1>
They are both valid, and equally accessible. Due to the aforementioned failures and a simplistic algorithm, the first one will trigger a 'missing language definition' failure, while the second will not.
Another twist regarding basic document structure involves a combination of the above failure to recognize which elements are optional, and a simplistic, antediluvian algorithm for determining the content type, elaborated on in the previous sections. TotalValidator does not check and process server headers, and thus relies on the extension to determine the content type. As explained before, even explicit choices of the document type (DTD) will not be helpful in this regard.
If the document has an extension other than .htm or .html, say .xht or .xhtml, then TotalValidator will require end tags for elements such as html or body, even though the document is not served as XML in any form.
Unfortunately, reality requires that developers sometimes choose (or are forced) to cater to users equipped with broken software, often shipped by a dominant vendor which cannot be ignored, especially by commercial entities. This story is as old as the web, and there needs to be a way of dealing with these issues. One of the widely used methods involves CSS hacks, which exploit known bugs and deficiencies in CSS parsers. An example used to target Internet Explorer:
@media screen and (min-width:0\0) {
    /* override rules here */
}
This is a malformed media query: the value is invalid and should cause the query to be discarded by the parser. A broken parser will instead accept the rules within the block, enabling the developer to deliver overrides to that browser only.
During parsing of a stylesheet with this rule, TotalValidator crashes instead, throwing a Java exception (string index out of range). A crash is a fatal failure which cannot be recovered from. Therefore, if there are any errors in the stylesheet, they will not be reported. It should also be noted that the same intentional syntax error used as a hack does not cause a crash if used outside the query, like so:

div { min-width: 0\0; }
Temporary removal of the hack is required for the stylesheet to be processed. It should be noted that the WCAG specification does not deal with CSS hacks at all. Older versions of Internet Explorer (IE) could be targeted with conditional comments, but the newest version cannot, while still struggling to implement web standards such as Multicolumn Layout. CSS hacks are thus necessary in the real world, and a decision has to be made regarding how to handle accessibility validation failures resulting from their presence. Within the context of the Universal Design Law in Norway, it is recommended that such failures be ignored for the purpose of accessibility compliance.
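For reference, a conditional comment of the kind honoured by older IE versions (the stylesheet path is hypothetical):

<!--[if lt IE 9]>
  <link rel="stylesheet" href="/css/ie-legacy.css">
<![endif]-->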
TotalValidator does not parse grouped selectors (comma-separated selector lists) correctly. In particular, it is unable to deal with lists featuring a pseudo-element as a component:
a, a::before, div {}
The whole list is treated as a single pseudo-element selector, in which the pseudo-element cannot be followed by any further selector, as in this obviously invalid example:
a::before div {}
A list made up of simple or compound constituent selectors is a list of separate, unrelated selectors, and should be interpreted as such.
There is only one style language on the web. Older HTML specifications left room for alternative languages, but in HTML5 there is only CSS. Therefore, specifying the content type via the type attribute is considered redundant clutter:
<!doctype html>
<title>a simple document</title>
<link rel="stylesheet" type="text/css" href="/css/all.css">
<style type="text/css">
html { color: white; background: black; }
</style>
<h1>roses are blue violets are red</h1>
In the case of link elements, it is even doubly redundant if the content type is specified through server headers:

AddType 'text/css; charset=utf-8' .css
TotalValidator will create bogus report documents full of false positives, failing documents for lacking a type attribute.
A similar source of useless failure reports (false positives) is the insistence on style and script definitions being present in the document. This could be handled globally via server headers, through the Content-Style-Type and Content-Script-Type directives, but since TotalValidator doesn't parse and process server headers, it insists on in-document additions that are redundant. They are doubly redundant, because there are no other languages that can be (or are) used on the web, unlike when the web was in its infancy. Errors regarding the presence of the style attribute without a corresponding style language definition in the document are particularly harmful.
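Were one to emit these globally anyway, a sketch of an Apache configuration fragment (assuming mod_headers is enabled) might be:

# assumed global defaults; HTML 4.01 defines these as HTTP headers
Header set Content-Style-Type "text/css"
Header set Content-Script-Type "text/javascript"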
The pace of specification change has increased in the last five years, especially relative to the sleepy times of the early 2000s, when CSS 2.1 was being incrementally improved and non-dominant browser vendors were implementing that specification. Currently, new specifications are finalized rather quickly, and implemented even faster. It is not surprising, then, that automated checkers and validators are not quite keeping up.
TotalValidator lacks or has incomplete support for the following specifications for which public testsuites exist:
the touch-action property
the :unresolved pseudo-class
the image-rendering property

In CSS3, aliases for deprecated CSS2.1 and migrated (redefined) properties were specified, so that older pages should continue working. These aliases are not recognized by TotalValidator.
div { page-break-before: always; page-break-after: avoid; }
div { break-before: always; break-after: avoid; }
The former rule contains deprecated properties, while the latter rule contains their redefined, broader equivalents. Any conforming user agent should parse and apply both identically in the context for which the former is applicable. This behavior is mandated by the specification:
For compatibility with CSS Level 2, UAs that conform to [CSS21] must alias the ‘page-break-before’, ‘page-break-after’, and ‘page-break-inside’ properties to ‘break-before’, ‘break-after’, and ‘break-inside’ by treating the ‘page-break-*’ properties as shorthands for the ‘break-*’ properties
TotalValidator instead reports the deprecated properties as invalid.
The skewX() value from the CSS3 Transforms specification is incorrectly reported as invalid:
div {transform: skewX(45deg);}
If a value of transition-property from the CSS3 Transitions specification is a property which is defined as animatable:
div {transition-property: clip;}
it is incorrectly reported in TotalValidator as invalid:
E777 transition-property has a value "clip" that does not match what was expected: none | <single-transition-property> [ ',' <single-transition-property> ]*
TotalValidator fails to parse the optional fill keyword in the value of the border-image-slice property from the Backgrounds and Borders CSS3 specification, incorrectly reporting it as invalid:
p {border-image-slice: 10;}
div {border-image-slice: 10 fill;}
E777 border-image-slice has a value 'fill' that does not match what was expected: [<number> | <percentage>]{1,4} && fill?
Another instance where TotalValidator has failed to catch up is the run-in value of the display property:
h2 {display: run-in;}
Originally part of CSS2, it was removed from CSS 2.1 due to poor browser support, and moved to CSS3, specifically the CSS3 Box module. TotalValidator simply reports run-in boxes as invalid.
TotalValidator issues bogus errors regarding comments when it encounters dashes within one:
<p>some text <!-- test-test-test --> more text</p>
Such failures are just noise, and the likelihood of a commented-out block of text or code containing two or more non-consecutive dashes per line is high.
The summary attribute is a sunset component in HTML, since its contents are extractable only with CSS, while the need for a more versatile mechanism was long recognized. HTML5 introduces the summary element, a child of the details element. However, for educational reasons, the presence of a summary attribute should not result in a bare failure, but in one accompanied by a recommendation or hint to replace it with the new element(s) and/or with a table caption, the contents of which do not require stylesheet manipulation to be presentable to both sighted and non-sighted users.
Upon encountering a table summary, TotalValidator throws an error, but presents inconsistent and even conflicting information. As an example, the code:
<table summary="descriptive enough text">
yields this:
1. E673 The 'summary' attribute is obsolete:
2. P879 - 1 instance(s): Provide either a <caption>, 'title' or 'summary' attributes to describe the table.
The former is generated inline, at the relevant line in the document, and refers to the expanded notes at the bottom of the report. Those notes contain a directive taken directly from the WCAG specification, which in this respect conflicts with the HTML5 specification prohibiting the summary attribute. The result is that an error citing the obsolescence of the attribute leads to a recommendation to use that very same attribute.
Furthermore, no mention is made of the new HTML5 elements. That stands in contrast to the actual support for these elements, as the following document will pass the TotalValidator checks:
<table>
<caption>
<details>
<summary>description definition</summary>
<p>long description content</p>
</details>
</caption>
In other words, the relevant parser support is there, but it is not effectively used.
Another problem is the inherent inability of automated checkers to associate descriptions surrounding objects such as tables and images with the objects themselves. This is a notorious problem for figures, but in this case a long description may precede the table, marked up as a plain paragraph. Such context detection is beyond the capability of software, even though the practice is encouraged in accessibility guidelines.
When a descriptive enough caption element is used for the table, the document may still fail to pass an automated TotalValidator check, depending on the document type.
<table>
<caption>sufficient description here</caption>
If the document is HTML, it will pass. If it is the same document, still served as HTML, but with a different extension (for example XHTML viewed locally, or a document with an .xht extension), then it will fail validation for no discernible reason.
If the cols attribute is used with only asterisks:
<textarea cols="*,*">
then TotalValidator reports an unrelated error:
E910 [WCAG 2.0 1.4.4 (AA)] Use relative, rather than absolute units
This error message is misleading. This attribute was commonly used on framesets, with this particular value used for equal distribution. TotalValidator should probably note obsolescence here instead.
TotalValidator has more confusing messages. For example, if it finds an unquoted value of the src attribute, it throws the following error:
E009 Invalid characters found in the 'src' attribute's value starting at character: 8
Inspecting the relevant line, particularly the character quoted, does not make it immediately clear what exactly is wrong, unless the document author has seen this error before. The informational note at the bottom of the report actually refers to unquoted paths used as values for the attribute, which is what the error should have mentioned as well, instead of referring to a specific character down the path.
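By way of illustration (the path is hypothetical), this is the kind of markup the note refers to:

<img src=../images/photo.png alt="a photo">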
TotalValidator has no support for the srcset attribute, the source element, the picture element, or the template element.
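For reference, a short sketch using these features (the filenames and sizes are hypothetical), which current HTML parsers accept:

<picture>
  <source media="(min-width: 40em)" srcset="photo-wide.jpg">
  <img src="photo.jpg" srcset="photo-2x.jpg 2x" alt="a photo">
</picture>
<template id="row">
  <li>placeholder item</li>
</template>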
It is worth noting that the WCAG specification does not deal with newly introduced HTML5 elements and attributes at all, since its publication pre-dates the HTML5 specification. What automated checkers can do is add parser support for these elements. Not all have yet been covered by the tested version of TotalValidator. Helpful hints and recommendations on how to replace or augment obsolete techniques will of course be missing.
The aria-expanded attribute is used here in a separator context. However, this bit:
<div role="separator" aria-expanded="false">
is marked as invalid in TotalValidator for reasons unknown:
E684 The 'aria-expanded' attribute is not allowed in this context
TotalValidator does not really have a concept of SVG. One manifestation of this is that it treats vector graphics as external objects within the XML realm. In HTML5, SVG can be freely mixed into HTML documents inline, not only included as an external resource referenced, for example, by an img or an object element. This is a major problem on several levels, particularly in light of the fact that HTML parsers for SVG are under development in modern browsers. In other words, vector graphics will no longer be parsed as XML, unless the parent document of an inline graphic is served as application/xhtml+xml, or the object is external and also served as XML: image/svg+xml. Since new parsers are under development, that may change yet, and fast.
Still, even now islands of XML may be freely used, namespace-free, within HTML documents. TotalValidator drops the entire contents of the svg element. Consider the following example:
<div>this is
<svg width="500" height="200">
<text x="10" y="10">sample text</text>
</svg>
</div>
It is perfectly valid, with text content accessible to humans, robots, graphical browsers, and also text browsers since time immemorial:
This screenshot, taken from the elinks text browser in the Unix console, shows clearly that inline SVG is seamlessly mixed with the surrounding non-vector HTML content, much in the vein of image alternative text content. The text element within vector graphics is used to mark up text.
TotalValidator drops the entire contents of the svg element, as if it didn't exist at all. This leads to a major dysfunction, for example when vector graphics with text content are used within anchors:
<a>this is
<svg width="500" height="200">
<text x="10" y="10">all a link</text>
</svg>
</a>
Again, text/console browsers seamlessly inject text content of inline SVG, mixing it with the inline node of the parent anchor:
And yet, having removed the entire contents of the inline vector graphics, text content included, TotalValidator will throw an error of this type for the example used above:
P871 [WCAG 2.0 1.1.1 (A)] Link text is missing
P871 - 1 instance(s): Describe the purpose of a link by providing descriptive text as the content of the <a> element. The description lets a user distinguish this link from other links in the Web page and helps the user determine whether to follow the link. The URI of the destination is generally not sufficiently descriptive. See http://www.w3.org/TR/WCAG20-TECHS/H30.html
In other words, this is a major source of false positives in TotalValidator, discouraging authors from using vector graphics.
External SVG documents are ignored by TotalValidator in their entirety. While accessibility of various constructs used solely within vector graphics is a potential area of research, parsing of such documents, handling of inline resources, and text fallback should be supported by any automated software at a minimum. TotalValidator will not report any potential errors in externally referenced SVG documents, whether referenced as links, objects, or images.
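For reference, the three referencing methods mentioned (the filename is hypothetical):

<a href="figure.svg">figure, as a link</a>
<object data="figure.svg" type="image/svg+xml">fallback text</object>
<img src="figure.svg" alt="figure, as an image">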
Due to the lack of support for vector graphics, TotalValidator will report errors upon encountering SVG styling properties. This applies to all vector-specific properties, such as fill and stroke, for example:
text { fill: white; stroke: black; stroke-width: 3; }
The result being:
E776 Unknown property: fill
E776 Unknown property: stroke
E776 Unknown property: stroke-width
Before the advent of web applications, phone "apps" and the like, forms were used to solicit input from the page consumer, and to submit the gathered information in a structured fashion to the server for processing. Today this is no longer the only way user input is handled. Increasingly, the remote server is not involved at all, with all user input processed by the web application on the fly. This has several implications:
User input is still gathered through input elements, select elements, and so on.
form elements can suddenly become spurious as containers for input elements.
form element attributes relating to the submission process may not be used at all, because they serve no purpose for self-contained web applications.
fieldset elements in theory cannot be used outside of the form element, but if there is none, the need for grouping input controls may still be there for all but the very simplest input-processing applications, as may the need to associate them with a label.
Submit inputs serve no purpose, and may not be used at all.

Self-contained web applications are a fact of life; their number is already huge, and will only increase, exponentially, with the rise of mobile computing. The issues highlighted above have important implications for accessibility, as forms as we know them will be broken. The WCAG specification does not address this issue at all.
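A minimal sketch of such a self-contained input section (hypothetical markup: no form wrapper, no submission attributes, no submit control):

<fieldset>
  <legend>Playback settings</legend>
  <label for="volume">Volume</label>
  <input id="volume" type="range" min="0" max="100">
  <label for="quality">Quality</label>
  <select id="quality">
    <option>low</option>
    <option>high</option>
  </select>
</fieldset>
<!-- changes are applied by script as the user interacts; nothing is ever submitted -->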
This is also a big problem with respect to automated accessibility checking, for the following reasons:
There may be no form container at all, which will generate errors, and potentially dozens of them per application.
Grouping elements such as fieldset may be used outside of a form container, resulting in another set of errors.
If a form element is used as a wrapper, the required attributes will not be present, generating errors; and if they are present, they will have to point nowhere, or otherwise be empty or nonsensical (much as common methods were used in the past to "fix" the alt attribute accessibility requirement).
If a form element is used as a wrapper, the lack of a submit input will generate an error, but since input is processed on the fly, adding any such element will confuse the user, since it will not execute the form.

In summary, there is no way to validate self-contained web applications, and all such applications will generate tons of errors. These errors cannot be fixed, and attempts to do so in order to "swat the problem away" will likely decrease the accessibility of the application. TotalValidator follows the WCAG specification, which pre-dates the recent explosion of web apps, and is unsuitable for processing these applications.
Major issues with TotalValidator: