Listening to an audio story using human-level artificial intelligence

 

     


 

The robot's consciousness is the voice in its head that tells it about the environment. When the robot listens to an audio story, its consciousness fabricates a movie based on the spoken words. This fabricated movie is a primitive sequence of images and flowcharts that represents the meaning of the words being heard.

This video is silent at the beginning because I wanted viewers to watch the robot's thoughts while it listens to the audio story. I slowed things down so that people can see the actual images and sounds formed from the audio text. The second part of the video contains audio to show viewers how the robot thinks at normal speed.

When a human-level robot is built, this is how it will think when it listens to spoken words. The robot was taught by teachers in elementary school to understand the meaning of words and sentences. When words are spoken, visual images (or 5-sense data) are activated, and these activated thoughts are the meaning of the spoken words: words=movie and movie=words. For example, if the robot hears the words "the cat jumped over the box", the movie sequence will be a cat jumping over a box. The movie activates the words and vice versa.
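
As a rough illustration of the words=movie idea (a sketch, not the actual implementation described here), the following Python snippet models a two-way association table: a sentence activates a movie sequence, and the same movie sequence activates the sentence. The clip names, such as cat_crouches, are hypothetical placeholders for stored 5-sense data.

    # A minimal sketch of the words=movie / movie=words association.
    # Clip names like "cat_crouches" are hypothetical placeholders for
    # stored 5-sense data.
    class AssociationMemory:
        def __init__(self):
            self.word_to_movie = {}   # sentence -> sequence of clips
            self.movie_to_word = {}   # sequence of clips -> sentence

        def learn(self, sentence, movie):
            # Associate a sentence with its movie in both directions.
            self.word_to_movie[sentence] = list(movie)
            self.movie_to_word[tuple(movie)] = sentence

        def meaning_of(self, sentence):
            # Hearing the words activates the movie (the meaning).
            return self.word_to_movie.get(sentence)

        def words_for(self, movie):
            # Seeing the movie activates the words.
            return self.movie_to_word.get(tuple(movie))

    memory = AssociationMemory()
    memory.learn("the cat jumped over the box",
                 ["cat_crouches", "cat_in_air_over_box", "cat_lands"])
    print(memory.meaning_of("the cat jumped over the box"))
    # ['cat_crouches', 'cat_in_air_over_box', 'cat_lands']
    print(memory.words_for(["cat_crouches", "cat_in_air_over_box", "cat_lands"]))
    # the cat jumped over the box

Either direction retrieves the other, which is the "movie activates the words and vice versa" behavior in miniature.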

Learning the meaning of words comes from a simple lesson. Teachers take a word and give a picture to represent that word. The students associate the word with the picture. Later, when a student reads the word, the picture pops up in his head; this picture is the meaning of the word. With sentences, movie sequences activate in the mind, and these movie sequences are constructed from complex patterns. These complex patterns are formed in the robot's brain by comparing similar examples.
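
Here is a minimal sketch of the teacher's lesson just described, again in Python with hypothetical picture names: each word is paired with a picture, and reading a sentence then pops up a crude movie built from the learned pictures. The complex pattern-matching for full sentences is not modeled here.

    # A sketch of the teacher's lesson: pair each word with a picture,
    # then build a crude movie for a sentence from the learned pictures.
    # Picture names are hypothetical stand-ins for stored images.
    lexicon = {}  # word -> picture learned from the teacher

    def teach(word, picture):
        # The teacher shows a word together with a picture.
        lexicon[word] = picture

    def read(sentence):
        # Reading a word pops up its picture; a sentence yields a movie.
        return [lexicon[w] for w in sentence.split() if w in lexicon]

    teach("cat", "picture_of_cat")
    teach("jumped", "picture_of_jumping")
    teach("box", "picture_of_box")
    print(read("the cat jumped over the box"))
    # ['picture_of_cat', 'picture_of_jumping', 'picture_of_box']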

My robot doesn't use language parsers, grammar rules, semantic networks, relational graphs, Bayesian networks, genetic programming, predicate calculus, rule-based systems, or common-sense systems (such as CYC). My robot doesn't use machine learning, whereby programmers are required to input knowledge into the system. My robot learns all of its information from teachers in school.

This is important because when the robot answers questions about the audio story, he answers based on the fabricated movie and not on the audio words. Some of the data in the fabricated movie contains logic and isn't generated by the meaning of any particular word or words. One more thing worth noting is that understanding natural language is only a small part of human-level artificial intelligence.
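
To illustrate the difference, here is a small hypothetical sketch of answering a question from the fabricated movie rather than from the audio transcript. The movie is represented as a list of (actor, action, object) frames; every name here is made up for this example.

    # A sketch of answering a question from the fabricated movie rather
    # than from the audio words. Frames are (actor, action, object)
    # triples; every name here is made up for this example.
    movie = [("cat", "crouches", None),
             ("cat", "jumps_over", "box"),
             ("cat", "lands", None)]

    def answer(subject):
        # Search the fabricated movie, not the audio transcript.
        return [f"{actor} {action}" + (f" {obj}" if obj else "")
                for actor, action, obj in movie if actor == subject]

    print(answer("cat"))
    # ['cat crouches', 'cat jumps_over box', 'cat lands']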

 
