Parsers need to canonicalize maps to any depth in order to
detect duplicates. This is "complex" by any definition of the word.
It isn't complex in terms of computational efficiency, though: you can
canonicalize in O(N log N) and do it while reading.
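A minimal sketch of that idea (not from this thread, and with an invented data model): decoded values are represented as scalars, `("array", [...])`, and `("map", [(key, value), ...])`, since CBOR map keys may themselves be maps. Each map's entries are canonicalized recursively and sorted, which is where the O(N log N) comes from, and duplicates are detected by structural equality of the canonical forms.

```python
def canonicalize(value):
    """Return a hashable, order-independent canonical form of a value.

    Hypothetical representation: maps are ("map", [(k, v), ...]) and
    arrays are ("array", [...]); everything else is a scalar.
    Raises ValueError on duplicate map keys under structural equality.
    """
    if isinstance(value, tuple) and value and value[0] == "map":
        # Canonicalize keys and values first, so nested maps with the
        # same contents in a different entry order compare equal.
        entries = [(canonicalize(k), canonicalize(v)) for k, v in value[1]]
        keys = [k for k, _ in entries]
        if len(keys) != len(set(keys)):
            raise ValueError("duplicate map key under structural equality")
        # Sort by repr since canonical forms of mixed types are not
        # directly comparable in Python; any total order would do.
        return ("map", tuple(sorted(entries, key=repr)))
    if isinstance(value, tuple) and value and value[0] == "array":
        return ("array", tuple(canonicalize(v) for v in value[1]))
    return value
```

With this, two maps whose keys are themselves maps with reordered entries still collide as duplicates, which a byte-level comparison of the undecoded input would miss.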
And the consequence of not using structure-equality in duplicate detection is
complex.
I think CBOR should be clear about how it handles sharing and equality.
Agree.
Larry
--
http://larry.masinter.net
(obscure footnote: Serializing structures with duplicate pointers is fun; you
need some notation to signify back pointers and to drop anchors. Using hash
tables and canonical values was part of the circlprint/hprint
http://pdp-10.trailing-edge.com/decuslib20-01/01/decus/20-0004/21lisp.tty.html
implementation.)
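The back-pointer technique in the footnote can be sketched as follows. This is a hedged illustration, not the circlprint/hprint code: it uses Lisp's `#n=`/`#n#` circular-print notation, with a first pass over a (possibly cyclic) list structure to find shared nodes via a hash table keyed on object identity, and a second pass that drops an anchor `#n=` at the first visit and emits a back pointer `#n#` afterward.

```python
def circle_print(obj):
    """Serialize a possibly-cyclic nested-list structure using
    #n= (anchor) and #n# (back pointer) notation."""
    shared = {}    # id(node) -> label, for nodes reached more than once
    visited = set()

    def scan(o):
        # First pass: record which nodes are shared (or cyclic).
        if isinstance(o, list):
            if id(o) in visited:
                if id(o) not in shared:
                    shared[id(o)] = len(shared) + 1
                return
            visited.add(id(o))
            for child in o:
                scan(child)

    def emit(o, printed):
        # Second pass: print, anchoring shared nodes on first sight.
        if not isinstance(o, list):
            return str(o)
        if id(o) in printed:
            return "#%d#" % shared[id(o)]
        if id(o) in shared:
            printed.add(id(o))   # mark before recursing, for cycles
        body = "(" + " ".join(emit(c, printed) for c in o) + ")"
        if id(o) in shared:
            return "#%d=%s" % (shared[id(o)], body)
        return body

    scan(obj)
    return emit(obj, set())
```

For example, a self-referential list `x = [1, 2, x]` prints as `#1=(1 2 #1#)`.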