humanlevelartificialintelligence.com
Google vs. Microsoft -- a fight for the future (2015)

 

Google and Microsoft are primarily fighting over three things:  1.  Universal artificial intelligence.  2.  A human robot.  3.  A perfect timeline of Earth, capable of recording every object, event, and action in the past, present, and future, atom-by-atom and frame-by-frame.

In 2006 I sent letters to Microsoft, Google, Apple, IBM, GE, Honda, GM, Ford, Boeing, and all the other major American companies telling them about universal artificial intelligence.  I explained in the letters that I could help them build software that can do any human task.  Unfortunately, I was turned down by most companies.  Some companies didn't even bother to respond to my letter.

After publishing my first book, called Human Level Artificial Intelligence, I was concerned that the technology companies would copy my work and build my invention.  This article was written in 2015.  Now that I look back at all the products and services from Google and Microsoft, I can't help but wonder whether they in fact copied my ideas.

Let's look at all the products from Google over the last 9 years.

 

1.  3-d street view (2008)

2.  Visual search (2009)

3.  Instant search (2010)

4.  Sound search (2010)

5.  5 sense search engine (2011)

6.  Search engine with meaning (2010)

7.  Visual search on Android phones (2010)

8.  Search engine with Q and A (2011)

9.  Google glasses (???)

10. Google timeline of past, present, and future (2012)

11.  Deep learning + reinforcement learning to play videogames (2014)

12.  Deep learning + reinforcement learning + AGI to play videogames (2015)

 

The most important technology from Google is number 12.  AGI stands for artificial general intelligence; the concept it names is what has traditionally been called human-level artificial intelligence.  It provides the software with the common sense knowledge and insight needed to play videogames.  DeepMind was bought by Google in 2014, and at that point its software could only play simple Atari games.  It can't play games like Donkey Kong or Pac-Man, because those games are goal driven.  In Pac-Man, for example, the software takes random actions and finds the linear sequence of actions that maximizes points.  However, the player doesn't get rewarded for collecting dots; it gets rewarded for eating enemies.  Thus, the software assumes the goal is to eat enemies, which causes it to never collect all the dots on the screen.  In complex games like Zelda, the software will never be able to pass the game.  Google tried to solve this problem in 2015 by introducing AGI.
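To make the reward problem concrete, here is a toy sketch in Python (my own illustration, with made-up action names and point values; it is not DeepMind's actual system).  A purely reward-driven agent picks whatever action earns the most points, so it never treats "collect every dot" as the goal unless that goal is injected by hand:

```python
# Toy illustration of reward mis-specification in a Pac-Man-like game.
# Action names and point values are invented for this example.

def greedy_policy(rewards):
    """Pick the single action with the highest immediate reward."""
    return max(rewards, key=rewards.get)

# Dots earn nothing, enemies earn points, so a reward-maximizing
# agent spends its time chasing enemies instead of clearing the level.
rewards = {"collect_dot": 0, "eat_enemy": 200}
assert greedy_policy(rewards) == "eat_enemy"

# Only when the true goal is encoded as a reward does the agent pursue it.
goal_aware = dict(rewards, clear_all_dots=1000)
assert greedy_policy(goal_aware) == "clear_all_dots"
```

This is the gap that knowledge about the game is supposed to fill: the goal has to come from understanding the game, not from the raw point values.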

Google wants to introduce human intelligence into the software so it can play goal-driven videogames.  In the case of Pac-Man, the software will know, based on human intelligence, that the goal is to collect all the dots on the screen.  AGI is also used to tell the software important information about the game: this is the player, these are the enemies, this is your life meter, your goal is to pass every level, losing a game is pain, and so on.

Technology number 12 from Google has striking similarities to the invention I filed with the US patent office back in 2006.  Deep learning is just the beginning.  If you look at my YouTube videos on videogames, that's what Google is trying to build with AGI.  The term AGI should not be used because it's a recently coined term.  The artificial intelligence community has been using the term human-level artificial intelligence for 40 years, and I think that term should be used instead.

Google's intention in 2015 is to use this AGI in every single product it sells to the public, including its search engine, smartphones, computers, operating systems, Google glasses, 3-d street view, etc.

 

All the products above are technologies Google and Microsoft have been competing over for the past 9 years.  Unfortunately for Microsoft, Google managed to build first and sell first.  I think one of the reasons is that Microsoft was following anti-trust guidelines from the government, while Google was building products on the fly.  Google builds products and sells them immediately, without testing or following government regulations (by-passing prototypes).

If you look at the products described above, it's apparent to me what Google is trying to build.  Although Google's main goal is to build a human robot, they can't, because the technology is very complicated.  Basically, what they are trying to do is build the individual parts of a human robot.  Along the way they can make products and services to sell to the public (the products listed above).

In order to understand how the products above relate to my human robot, I have to explain them one by one.  In summary, Google took ideas from my human robot and applied them to its search engine and smartphone technologies to make them smarter.

 

3-d street view (2008)

The first thing I talked about in my books and patent applications is robot vision.  I talk about how the robot has to store images frame-by-frame in memory.  The images collected are organized in a 3-d environment.  The example used in my books was learning to store a 3-d representation of a city in the robot's memory.  In order to learn what a city looks like, the robot has to walk around the city, into all corners and places, to know what each street looks like.  The robot has the option of walking, driving a car, or flying a helicopter to see all parts of the city.

After numerous encounters with streets and housing structures, the robot's brain will contain a 3-d map of the city, frame-by-frame.  The robot also forgets information, and that is one of the techniques humans use to remember large amounts of images from the environment.

Why is this 3-d map so important to the human robot?  In my books, I stated that the robot uses this 3-d map of the environment for logic.  The example I used was answering location questions.  If a stranger approaches the robot and asks where the closest library is, the robot will activate a map of the city from memory and use that information to answer the question.
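As a sketch of how a stored map could answer such a location question, here is a toy version in Python (the landmarks and coordinates are made up for illustration):

```python
import math

# Hypothetical landmarks the robot remembered while exploring the city.
city_map = {
    "central library": (2.0, 3.0),
    "branch library": (8.0, 1.0),
    "post office": (5.0, 5.0),
}

def closest(kind, here):
    """Return the nearest remembered place whose name contains `kind`."""
    candidates = {name: pos for name, pos in city_map.items() if kind in name}
    return min(candidates, key=lambda name: math.dist(here, candidates[name]))

# A stranger standing at (1, 1) asks where the closest library is.
assert closest("library", (1.0, 1.0)) == "central library"
assert closest("library", (9.0, 2.0)) == "branch library"
```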

I was furious when I heard that Google was unveiling 3-d street view to the public in 2008, because it had such similarities to my description of robot vision.  For those who don't know, 3-d street view is a website people visit to view the streets of cities, frame-by-frame.  Instead of a robot walking the streets to collect street images, Google uses a 360-degree camera mounted on a car.

At this point, I told myself that 3-d street view is just a small part of my invention and that it would take a lot more copying before a human robot was created.  Was I in for the surprise of my life.

 

Visual search (2009)

In my books and patent applications, the second thing I talked about was visual search.  I was complaining that the search engines (in 2006) used only text to search for websites on the internet.  For example, in Google you can only type text into the search box to look for information on the web.  I proposed a search engine that uses visual images to look for data on the internet.

Here is how this visual search works.  First, the user takes an image and submits it to the search engine.  The search engine breaks the image up into objects using common sense knowledge (human intelligence).  Next, it prioritizes the visual objects into a hierarchical tree based on what the user is searching for.  Finally, the search engine translates these visual objects into words and uses those words to search for websites on the internet.
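The steps above can be sketched as a small pipeline (a toy version: the object detector is stubbed out with fixed labels and salience scores, since real detection is the hard part):

```python
# Sketch of the visual-search pipeline: detect objects, rank them,
# translate the top ones into words, and use the words as a text query.

def detect_objects(image):
    """Stand-in for a real object detector; returns (label, salience) pairs."""
    return [("tennis ball", 0.9), ("racket", 0.7), ("court", 0.4)]

def build_query(image, top_k=2):
    objects = detect_objects(image)
    # Prioritize the visual objects (the "hierarchical tree" step).
    ranked = sorted(objects, key=lambda obj: obj[1], reverse=True)
    # Translate the top visual objects into words for a text search.
    return " ".join(label for label, _ in ranked[:top_k])

assert build_query("tennis_match.jpg") == "tennis ball racket"
```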

Google’s visual search is identical to the idea I proposed in my books and patent applications.  How this technology works is also identical. 

Google used a tennis example to show how its visual search works.  I use a very similar example.  The tennis example is a picture of a tennis match.  The search engine has to identify objects in the picture using common sense knowledge.  For example, how does the search engine know that the tennis ball is a tennis ball and not something similar, such as a lemon?  Humans can determine that the yellow ball in the picture is a tennis ball and not a lemon.

At this point, I was concerned because if they are focusing on visual search then they will eventually move on to the other human senses, such as sound, taste, touch and smell. 

 

Instant search (2010)

In my books and patent applications I stated that a human robot senses information from the environment frame-by-frame.  This is why in my first patent claim I have a for-loop to represent the robot’s brain updating information from the environment, incrementally. 

I think someone at Google saw the for-loop in my patent applications and said, "why not apply that to a search engine?"  Instead of submitting one image for visual search, why not submit a video, with the search engine updating itself on each frame?

 

Sound search (2010)

I was right about Google using the five human senses as input for its search engine.  It unveiled visual search, and now it is unveiling sound search.  This search engine is not exclusively for music; Google is trying to input sound data in general.  I guess they want their search engine to have ears as well as eyes.

At this point, I assumed Google wanted to build a 5 sense search engine.  They were still missing taste, touch, and smell.  These human senses are important because they make the search engine smarter.

 

5 sense search engine (2011)

What did I tell you?  In 2011, Google finally told the public that they are building a 5 sense search engine!

What Google is trying to do is convert its search engine into a human robot.  Let's say the 5 sense search engine has instant search, meaning it updates automatically every millisecond.  At that point, the search engine becomes my human robot.  This 5 sense search engine serves as the foundation for my human robot.  The only things missing are the robot's conscious, the future prediction functions, and a physical human body.  As you will see later on, Google will add more functions to its search engine, and it's going to look even more like my human robot.

 

Search engine with meaning (2010)

In my books and patent applications, I talk about the robot's conscious and how the conscious provides the knowledge the robot needs to act intelligently.  One of the things the conscious can do is provide meaning to language.  For example, in Hamlet the sentence "more matter and less art" is hard to understand.  However, a human can use logic to work out what this sentence really means: "get to the point".  An alternative phrasing is "speak facts and stop being vague", which also means get to the point.  Using intelligence, we are able to understand the true meaning of the words in the sentence.
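As a toy sketch, this meaning-mapping can be thought of as looking a phrase up in a paraphrase table (a hand-built table stands in here for real language understanding; the entries are the examples from the text):

```python
# Map a phrase to its intended meaning via a hypothetical paraphrase table.
paraphrases = {
    "more matter and less art": "get to the point",
    "speak facts and stop being vague": "get to the point",
}

def interpret(phrase):
    """Return the stored meaning if known, otherwise the phrase itself."""
    return paraphrases.get(phrase.lower(), phrase)

assert interpret("More matter and less art") == "get to the point"
assert interpret("hello") == "hello"
```

A real system would of course have to derive these meanings with logic rather than store them by hand.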

Google's search engine uses this method to generate the true meaning of what someone is typing in the search box.  The example used by Google for the search engine with meaning is very similar to my Hamlet example.

 

Search engine with Q and A (2011)

In my books and patent applications, I talk about how the robot's conscious can do many intelligent things.  Some of these include: doing tasks, doing multiple simultaneous tasks, solving conflicts between multiple tasks, providing meaning to language, solving problems, generating common sense knowledge, and so forth.  One thing the conscious can do is answer questions, or provide meaningful answers to questions.

Google's search engine with meaning provides natural language understanding for text typed in by users.  Google's Q and A engine gives users answers to questions.  Basically, Google's search engine is incorporating my robot's conscious.  With the robot's conscious, you can build search engines that can do human tasks, like predicting the future, solving problems, playing videogames, and so on.  In other words, the search engine can do more than just search for websites or answer questions.

Google glasses are a significant technology that bears resemblance to my human robot.  The AI in the glasses serves as the robot's conscious.  Whatever objects the user is seeing are fed into the AI, which tells the user important information about his environment: info on the visual objects the user is looking at, reminders of what he has to do, answers to questions, translations of foreign words, common sense knowledge, etc.

Thus, Google glasses evolved from a simple search engine into a human robot.  All Google has to do at this point is give the AI (Google glasses) a physical humanoid body.

Even Google's data storage has been modified to the point where the way the search engine retrieves, stores, and modifies information is based on a human brain (Google semantic search, 2012).

Universal artificial intelligence is one software program that can do any human task.  One example of universal artificial intelligence is a fully automated McDonald's.  When you walk into a fully automated McDonald's, you won't see a single human worker.  The cooks are robots, the manager is a robot, the janitors are robots, and the delivery person is a robot.  These robots work together to run the restaurant.  The universal AI can be used to automate all restaurants, supermarkets, post offices, malls, factories, and so forth.  The AI in universal artificial intelligence is universal and can be applied to all businesses and human occupations.

Google spent 7 years trying to build my universal artificial intelligence.  In my patent applications and books, I describe two ways to train a universal artificial intelligence.  One way is to use Google's glasses to train the UAI.  In fact, in my patent application I stated that the human trainer has to wear a camera mounted on his forehead to get a first-person point of view (2007).  The second way is to build a real human robot and let the human robot train the UAI.  Google's glasses store, frame-by-frame, what the user is looking at and hearing.  However, the human's perception can't be stored, so the human wearing the glasses has to speak what he is thinking.  The human's motor movements can't be stored either, so programmers have to build software to determine the movements of the human being (the trainer).

Using Google's glasses to train a universal artificial intelligence has its limits.  This is the method I suspect Google will use to train industrial robots like janitors, nurses, garbage collectors, pilots, drivers, etc.  Although this method has its limits, the second way to train the UAI is much simpler and more effective.  Fortunately for me, both methods were proposed in my books and patent applications.

At this point, I'm shocked at Google and how many ideas they took from me.  If you think this is the end, think again.

 

Google timeline of past, present, and future (2012)

In 2012, Google unveiled timeline.  This is software that tries to predict past, present, and future events based on articles and information available online.  The software takes information from tweets, news articles, emails, radio, etc., to form a timeline of events happening in the past, present, and future.
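As a data-structure sketch, such a timeline can be thought of as a time-ordered event store that supports range queries (the events below are invented placeholders):

```python
import bisect

class Timeline:
    """A minimal time-ordered event store."""

    def __init__(self):
        self._events = []  # (timestamp, description) pairs, kept sorted

    def record(self, ts, description):
        bisect.insort(self._events, (ts, description))

    def between(self, start, end):
        """Return descriptions of events with start <= timestamp <= end."""
        return [d for ts, d in self._events if start <= ts <= end]

tl = Timeline()
tl.record(2015, "event C")
tl.record(2008, "event A")
tl.record(2012, "event B")

assert tl.between(2008, 2012) == ["event A", "event B"]
```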

My copyrights and patents filed in 2008, entitled practical time machine, basically describe a perfect timeline of Earth.  Super intelligent robots are required to gather information and to predict past, present, and future events.  Every single object, event, and action is stored in a timeline, frame-by-frame.

Watch Google carefully in the future with their prediction technology.  They will say exactly what I said in my books.  Here's what I said in my book:

"The AI tracks every atom, electron, and em radiation from Earth's past, present, and future and stores that information in a timeline."

"I'm using this timeline to solve all cases from the FBI."

"I'm using this timeline to disprove or prove all religions on planet Earth."

Since this timeline of Earth records all events in Earth's past, present, and future, events that happened thousands of years ago are known.  The timeline stores the entire life of Jesus, frame-by-frame and atom-by-atom.  This is how we can disprove or prove religion.  The timeline can also tell us who the original author of Christianity was, if we find out that Jesus was a made-up character.

I believe Google will use Google glasses and their electronic devices (smartphones or the Google car) to create a map of the current environment and store that information in a permanent timeline.  They have no intention of erasing any data from their database on people, places, and things.  Private information about people will be collected every nanosecond, and every action they take, everything they do, and everywhere they go will be sent to Google's timeline.  By the way, Google has filed patents on this spying technology.  They also want to build a live feed on 3-d street view, which in my personal opinion clearly violates the constitution.  But the stupid government refuses to stop Google.  They refuse to even set up a limited bill on internet privacy.

Google is a tyrant and a dictator.  The government is corrupt and misguided.  If Google wants to take away my right to privacy and other people's rights to privacy, they should do it the old-fashioned way: file an amendment with the Supreme Court and let the judges decide.

 

Conclusion

Google and Microsoft have been competing with each other for the past 6 years over my first invention, universal artificial intelligence.  The products listed above show that there are similarities between Google's technology and my inventions.  I don't believe in coincidences.  After Google came out with visual search, I was convinced they took my ideas.

Remember one important point: as soon as Google succeeds in commercially selling a universal artificial intelligence, the US unemployment rate will go up to 50 percent.  So whatever Google is building, everyone will be affected.

Microsoft vs. Google vs. Darpa vs. IBM vs. Apple, etc.  They are all building the same damn thing: a universal AI that can replace human workers.  The government agency Darpa is actively funding robots.  Although they haven't specifically stated they are after the UAI, their goal is to build robots to replace human workers.

Corruption is the word to describe Congress.  On one hand, you have a high unemployment rate (7.9 percent in 2013); on the other hand, the government is giving money to Darpa to fund the robot challenge.  The purpose of the robot challenge is to build robots to replace janitors, rescue workers, nurses, drivers, farmers, and so forth.  By the way, it's much easier to build a robot soldier to kill people than to build a robot janitor to clean your house.

Darpa also funded the grand challenge in 2004 to build an autonomous car.  Today, they are ready to commercially sell autonomous cars.  Darpa works with Google, and this is what Google had to say: "by the year 2030, all cars on the streets and highways will be fully automated".  What this statement translates to is "all human drivers will be out of work by 2030".  The robot challenge of 2012 is the next level, where Darpa wants to build human robots to replace common human jobs like nurses and janitors.

Darpa and the government keep saying that they are building these robots to rescue people in natural disasters.  The point is, if these robots can do search-and-rescue tasks, then they can also do search-and-kill tasks.  As stated before, it's 10 times harder to build a robot janitor than a robot soldier.

Basically, Darpa and the government are interested in technology that puts people out of work.  This contradicts what they told the American people: that they want to put people back to work and lower the unemployment rate.  The only way to increase manufacturing jobs is to tell factories to stop buying automated machines from software companies.  No automated machines in factories means employers will have to hire humans to do the work.

These idiots think that if they advance technology, businesses will automatically generate more jobs.  This was true back in 1990, but certainly not today.  The next big thing is software to replace human workers.  Even the computer scientists who build this software are in danger of losing their jobs.

A live feed on 3-d street view, autonomous cars, and Google glasses are spying tools by Google.  The government knows this because Google filed patents on their ambient environment spying technology.  The government knows that Google wants to create a timeline to store all information, and that eventually this technology will take away all privacy from people, places, and things.  Despite this revelation, the government refuses to act on behalf of its citizens to protect our right to privacy.  By law, the live-feed 3-d street view is illegal.

I filed copyrights and patents on 8 inventions.  Universal artificial intelligence is my first invention.  Although I don't have concrete evidence that Google is copying my work, I find it highly suspicious that their products and services over the last 6 years bear a striking similarity to my universal AI.  And I know exactly what they are planning to build, not just for the immediate future, but for the distant future.

 

1.  Universal artificial intelligence

2.  Human level artificial intelligence

3.  Psychic robot

4.  Super intelligent robots

5.  Ghost machines

6.  Atom manipulator

7.  Practical time machine

8.  AI time machine

 

 


 


Copyright 2006 (All rights reserved)