Sunday, October 13, 2019
Natural Language Processing
There have been high hopes for Natural Language Processing. Natural Language Processing, also known simply as NLP, is part of the broader field of Artificial Intelligence, the effort to make machines think. Computers may appear intelligent as they crunch numbers and process information at blazing speed. In truth, a computer understands nothing but on and off and is limited to exact instructions. Yet since the invention of the computer, scientists have been attempting to make computers not only appear intelligent but be intelligent. A truly intelligent computer would not be limited to rigid programming-language commands; it would be able to process and understand the English language. This is the concept behind Natural Language Processing.

The phases a message goes through during NLP are message, syntax, semantics, pragmatics, and intended meaning (M. A. Fischer, 1987). Syntax is the grammatical structure; semantics is the literal meaning; pragmatics is world knowledge, knowledge of the context, and a model of the sender. Only when syntax, semantics, and pragmatics are all applied will accurate Natural Language Processing exist. (A toy sketch of this layering appears at the end of this post.)

Alan Turing made a prediction about NLP in 1950 (Daniel Crevier, 1994, p. 9):

"I believe that in about fifty years' time it will be possible to program computers ... to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."

But in 1950, computer technology was limited. Because of these limitations, the NLP programs of that day focused on exploiting the strengths computers did have. For example, a program called SYNTHEX tried to determine the meaning of sentences by looking up each word in its encyclopedia. Another early approach was Noam Chomsky's at MIT. He believed that language could be analyzed without any reference to semantics or pragmatics, simply by looking at the syntax. Neither technique worked. Scientists realized that their Artificial Intelligence programs did not think the way people do, and since people are far more capable than those programs, they decided to make their programs think more the way a person would. So in the late 1950s, scientists shifted from trying to exploit the capabilities of computers to trying to emulate the human brain (Daniel Crevier, 1994).

Ross Quillian at Carnegie Mellon wanted to program the associative aspects of human memory to create better NLP programs (Daniel Crevier, 1994). Quillian's idea was to determine the meaning of a word from the words around it. For example, look at these sentences: After the strike, the
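Quillian's "meaning from the surrounding words" idea can be illustrated with a very small Python sketch. To be clear, this is not Quillian's actual semantic-network program: the word "strike", the candidate senses, their signature word lists, and the example sentence below are all invented here purely to illustrate the principle that nearby words select a sense.

# Score each candidate sense of an ambiguous word by how many of its
# "signature" words appear in the surrounding sentence. The senses and
# signatures below are invented for illustration only.
SENSES_OF_STRIKE = {
    "work stoppage": {"union", "workers", "picket", "factory", "wages"},
    "physical blow": {"hit", "punch", "fist", "hammer"},
    "baseball call": {"batter", "umpire", "pitch", "ball"},
}

def pick_sense(sentence, senses):
    # Gather the surrounding words, then count overlaps with each signature.
    context = set(sentence.lower().replace(",", "").replace(".", "").split())
    scores = {sense: len(signature & context) for sense, signature in senses.items()}
    return max(scores, key=scores.get), scores

sentence = "After the strike, the union workers returned to the factory"
best, scores = pick_sense(sentence, SENSES_OF_STRIKE)
print(scores)  # {'work stoppage': 3, 'physical blow': 0, 'baseball call': 0}
print(best)    # work stoppage

Quillian's real system worked by spreading activation through a network of associated concepts rather than by counting overlaps, but the sketch captures the same intuition: the neighbours of an ambiguous word carry the evidence for its meaning.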
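The syntax, semantics, and pragmatics layering described earlier in the post can also be made concrete with a toy sketch. Again, this is purely illustrative: the tiny tag table, the who-did-what-to-whom frame, and the context dictionary are stand-ins invented here for what real systems do with far richer grammars and world knowledge.

# A toy pipeline: message -> syntax -> semantics -> pragmatics -> intended meaning.
SYNTAX_TAGS = {"the": "DET", "dog": "NOUN", "bit": "VERB", "mailman": "NOUN"}

def parse_syntax(message):
    # Syntax: assign a grammatical category to each word.
    return [(word, SYNTAX_TAGS.get(word, "UNKNOWN")) for word in message.lower().split()]

def build_semantics(tagged):
    # Semantics: extract a literal who-did-what-to-whom reading.
    nouns = [w for w, t in tagged if t == "NOUN"]
    verbs = [w for w, t in tagged if t == "VERB"]
    if len(nouns) >= 2 and verbs:
        return {"agent": nouns[0], "action": verbs[0], "patient": nouns[1]}
    return {}

def apply_pragmatics(literal, context):
    # Pragmatics: refine the literal reading using knowledge about the sender's world.
    reading = dict(literal)
    if reading.get("agent") in context:
        reading["agent"] = context[reading["agent"]]
    return reading

tagged = parse_syntax("The dog bit the mailman")
literal = build_semantics(tagged)   # {'agent': 'dog', 'action': 'bit', 'patient': 'mailman'}
intended = apply_pragmatics(literal, {"dog": "the neighbour's terrier"})
print(intended)                     # {'agent': "the neighbour's terrier", 'action': 'bit', 'patient': 'mailman'}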