As far as I can tell from the information presented at this location: not
really, although there could be significant synergy. The difference is
that computer-aided learning is still based on content developed by humans,
while intelligent searching must be handled completely automatically to
be useful.
I don't think it is possible to architect a semantic search engine (one that
resolves the complex queries you mentioned before) without human-in-the-loop
content markup. For example, Sir TimBL's vision (the Semantic Web) can be
used for semantics-based keyword matchmaking, but it requires us to mark up
content in an AI-tinted language like RDF. It is hard to come up with a
purely algorithmic solution (like PageRank) for semantic search/query.
On the other hand, the content to be searched must be tagged appropriately
first so the search engines know whether "bgp" means "Border Gateway
Protocol" or "Borders Group Inc.", so maybe the difference isn't all that
huge.
Yes. This kind of *intelligent* metadata tagging can be done using RDF.
But who is going to semantically annotate the bulk of the existing HTML
data pool?
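As a rough sketch of what such tagging might look like for the "bgp" example above, here is an RDF/XML fragment. The `ex:` namespace, the `topic` property, and all the document and concept URIs are invented for illustration, not taken from any real vocabulary:

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/terms#">

  <!-- a page where "bgp" means the routing protocol -->
  <rdf:Description rdf:about="http://example.org/docs/routing-faq">
    <ex:topic rdf:resource="http://example.org/concepts/BorderGatewayProtocol"/>
  </rdf:Description>

  <!-- a page where "bgp" means the bookseller -->
  <rdf:Description rdf:about="http://example.org/docs/retail-news">
    <ex:topic rdf:resource="http://example.org/concepts/BordersGroupInc"/>
  </rdf:Description>
</rdf:RDF>
```

A search engine that understood this markup could match a query about routing protocols against the first document only, because the ambiguous string "bgp" has been tied to a distinct concept URI in each case.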
I don't think AI is the (full) solution here, as some ambiguities are
simply not resolvable from the source information, even by reasonably
informed humans. For instance, there are seven people named
"Robert Rodriguez" in the movie industry alone.
A metadata-based solution can resolve this kind of contextual ambiguity.
Digital library folks have been doing this for years.
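That is essentially what library authority control does: each person gets a distinct identifier, and the ambiguous display name hangs off it. A hypothetical RDF/XML sketch of that idea (the URIs, properties, and roles below are invented for illustration):

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/terms#">

  <!-- two of the seven: same display name, distinct identifiers -->
  <rdf:Description rdf:about="http://example.org/people/robert-rodriguez-1">
    <ex:name>Robert Rodriguez</ex:name>
    <ex:role>director</ex:role>
  </rdf:Description>

  <rdf:Description rdf:about="http://example.org/people/robert-rodriguez-2">
    <ex:name>Robert Rodriguez</ex:name>
    <ex:role>film editor</ex:role>
  </rdf:Description>
</rdf:RDF>
```

With distinct URIs, any statement elsewhere (a film credit, say) can point at the right person unambiguously, even though the names collide.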
Or how about
http://www.irtf.org/siren/draft-klensin-dns-search-05.txt ?