Tag Archives: artificial intelligence

IBM and MIT pen 10-year, $240M AI research partnership

IBM and MIT came together today to sign a 10-year, $240 million partnership agreement that establishes the MIT-IBM Watson AI Lab at the prestigious Cambridge, MA academic institution.

The lab will be co-chaired by Dario Gil, IBM Research VP of AI, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering.

Big Blue intends to invest $240 million into the lab where IBM researchers and MIT students and faculty will work side by side to conduct advanced AI research. As to what happens to the IP that the partnership produces, the sides were a bit murky about that.

This much we know: MIT plans to publish papers related to the research, while the two parties plan to open source a good part of the code. Some of the IP will end up inside IBM products and services. MIT hopes to generate some AI-based startups as part of the deal too.

“The core mission of [the] joint lab is to bring together MIT scientists and IBM [researchers] to shape the future of AI and push the frontiers of science,” IBM’s Gil told TechCrunch.

To that end, the two parties plan to put out requests to IBM scientists and the MIT student community to submit ideas for joint research. To narrow the focus of what could be a broad endeavor, they have established a number of principles to guide the research.

First among these is developing AI algorithms with the goal of getting beyond specific applications for neural-based deep learning networks and finding more generalized ways to solve complex problems in the enterprise.

Secondly, they hope to harness the power of machine learning with quantum computing, an area that IBM is working hard to develop right now. There is tremendous potential for AI to drive the development of quantum computing and conversely for quantum computing and the computing power it brings to drive the development of AI.

With IBM’s Watson Security and Healthcare divisions located right down the street from MIT in Kendall Square, the two parties have agreed to concentrate on these two industry verticals in their work. Finally, the two teams plan to work together to help understand the social and economic impact of AI in society, which as we have seen has already proven to be considerable.

While this is a big deal for both MIT and IBM, Chandrakasan made clear that the lab is but one piece of a broader campus-wide AI initiative. Still, the two sides hope the new partnership will eventually yield a number of research and commercial breakthroughs that will lead to new businesses both inside IBM and in the Massachusetts startup community, particularly in the healthcare and cybersecurity areas.

Source:

IBM and MIT pen 10-year, $240M AI research partnership

Elon Musk Predicts The Cause Of World War III

Elon Musk has a prediction about the cause of World War III, and it’s not President Donald Trump; it may not even involve humans at all.

The head of Tesla and SpaceX on Monday shared a link on Twitter to a report about Russian President Vladimir Putin discussing artificial intelligence:

“Artificial intelligence is the future, not only for Russia, but for all humankind,” Putin was quoted as saying. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Musk added that the competition for AI superiority at the national level would, in his view, be the “most likely cause of WW3.”

By comparison, Musk said, the saber-rattling from North Korea wasn’t much to worry about.

One Twitter follower suggested that private companies, rather than governments, were far better at artificial intelligence. 

Musk replied:

He also apologized for the glum tweets, saying he was depressing himself, and promised: “Fun, exciting tweets coming soon!”

Source:

http://www.huffingtonpost.com/entry/elon-musk-world-war-iii_us_59ae3d24e4b0354e440c02a6

Putin says the country that perfects AI will be ‘ruler of the world’

Forget the arms race or space race — the new battle for technological dominance revolves around AI, according to Vladimir Putin. The Russian President told students at a career guidance forum that the “future belongs to artificial intelligence,” and whoever is first to dominate this category will be the “ruler of the world.” In other words, Russia fully intends to be a front runner in the AI space. It won’t necessarily hog its technology, though.

Putin maintains that he doesn’t want to see anyone “monopolize” the field, and that Russia would share its knowledge with the “entire world” in the same way it shares its nuclear tech. We’d take this claim with a grain of salt (we wouldn’t be surprised if Russia held security-related AI secrets close to the vest), but this does suggest that the country might share some of what it learns.

Not that this is reassuring to long-term AI skeptic Elon Musk. The entrepreneur believes that the national-level competition to lead AI will be the “most likely cause of WW3.” And it won’t even necessarily be the fault of overzealous leaders. Musk speculates that an AI could launch a preemptive strike if it decides that attacking first is the “most probable path to victory.” Hyperbolic? Maybe (you wouldn’t be the first to make that claim). It assumes that countries will put AI in charge of high-level decision making, Skynet-style, and that they might be willing to go to war over algorithms. Still, Putin’s remarks suggest that Musk’s concern has at least some grounding in reality — national pride is clearly at stake.

Source:

https://www.engadget.com/2017/09/04/putin-says-ai-leader-will-rule-the-world

Facebook AI learns human reactions after watching hours of Skype

There’s something not quite right about humanoid robots. They are cute up to a point, but once they become a bit too realistic, they often start to creep us out – a phenomenon known as the uncanny valley. Now Facebook wants robots to climb their way out of it.

Researchers at Facebook’s AI lab have developed an expressive bot, an animation controlled by an artificially intelligent algorithm. The algorithm was trained on hundreds of videos of Skype conversations, so that it could learn and then mimic how humans adjust their expressions in response to each other. In tests, it successfully passed as human-like.

To optimize its learning, the algorithm divided the human face into 68 key points that it monitored throughout each Skype conversation. People naturally produce nods, blinks and various mouth movements to show they are engaged with the person they are talking to, and eventually the system learned to do this too.
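
The article does not say which tools Facebook used, but 68-point facial landmark models are a long-standing convention in open source computer vision. Below is a minimal sketch of the kind of per-frame tracking described above, built on dlib's publicly available 68-point model rather than Facebook's own code (the file names are placeholders):

import cv2
import dlib

# dlib's standard 68-point landmark model, downloaded separately
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("conversation.mp4")  # placeholder input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        # 68 (x, y) points covering jawline, brows, eyes, nose and mouth;
        # a per-frame sequence of these is the raw signal a model could
        # learn nods, blinks and mouth movements from
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
cap.release()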

The bot was then able to look at a video of a human speaking, and choose in real time what the most appropriate facial response would be. If the person was laughing, for example, the bot might choose to open its mouth too, or tilt its head.

The Facebook team then tested the system with panels of people who watched animations that included both the bot reacting to a human, and a human reacting to a human. The volunteers judged the bot and the human to be equally natural and realistic.

However, as the animations were quite basic, it’s not clear whether a humanoid robot powered by this algorithm would have natural-seeming reactions.

Additionally, learning the basic rules of facial communication might not be enough to create truly realistic conversation partners, says Goren Gordon at Tel Aviv University in Israel. “Actual facial expressions are based on what you are thinking and feeling.”

Source:

https://www.newscientist.com/article/2146294-facebook-ai-learns-human-reactions-after-watching-hours-of-skype/

The world’s top artificial intelligence companies are pleading for a ban on killer robots

A revolution in warfare, in which killer robots, or autonomous weapons systems, are common on the battlefield, is about to begin.

Both scientists and industry are worried.

The world’s top artificial intelligence (AI) and robotics companies have used a conference in Melbourne to collectively urge the United Nations to ban killer robots or lethal autonomous weapons.

An open letter by 116 founders of robotics and artificial intelligence companies from 26 countries was launched at the world’s biggest artificial intelligence conference, the International Joint Conference on Artificial Intelligence (IJCAI), as the UN delays meeting until later this year to discuss the robot arms race.

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, released the letter at the opening of the conference, the world’s pre-eminent gathering of experts in artificial intelligence and robotics.

The letter is the first time that AI and robotics companies have taken a joint stand on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal talks on autonomous weapons. Of these, 19 have already called for a ban.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter says.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Signatories of the 2017 letter include:

  • Elon Musk, founder of Tesla, SpaceX and OpenAI (US)
  • Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)
  • Esben Østergaard, founder & CTO of Universal Robots (Denmark)
  • Jerome Monceaux, founder of Aldebaran Robotics, makers of Nao and Pepper robots (France)
  • Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
  • Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)

Walsh is one of the organisers of the 2017 letter, as well as an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons.

The 2015 letter was signed by thousands of researchers working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” says Walsh.

“It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.

“We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons,” he added.

Ryan Gariepy, founder of Clearpath Robotics, says the number of prominent companies and individuals who have signed this letter reinforces the warning that this is not a hypothetical scenario but a very real and pressing concern.

“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he says.

“The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”

Source:

http://www.businessinsider.com/top-artificial-intelligence-companies-plead-for-a-ban-on-killer-robots-2017-8

How AI, AR, and VR are making travel more convenient

From 50 ways to leave your lover, as the song goes, to 750 types of shampoos, we live in an endless sea of choices. And although I haven’t been in the market for hair products in a while, I understand the appeal of picking a product that’s just right for you, even if the decision-making is often agonizing. This quandary (the “Goldilocks syndrome” of finding the option that is “just right”) has now made its way to the travel industry, as the race is on to deliver highly personalized and contextual offers for your next flight, hotel room or car rental.

Technology, of course, is both a key driver and enabler of this brave new world of merchandising in the travel business. But this is not your garden variety relational-databases-and-object-oriented-systems tech. What is allowing airlines, hotels and other travel companies to behave more like modern-day retailers is the clever use of self-learning systems, heuristics trained by massive data sets and haptic-enabled video hardware. Machine learning (ML), artificial intelligence (AI), augmented reality (AR) and virtual reality (VR) are starting to dramatically shape the way we will seek and select our travel experiences.

Let every recommendation be right

AI is already starting to change how we search for and book travel. Recent innovation and investment have poured into front-end technologies that leverage machine learning to fine-tune search results based on your explicit and implicit preferences. These range from algorithms that are constantly refining how options are ranked on your favorite travel website, to apps on your mobile phone that consider past trips, expressed sentiment (think thumbs up, likes/dislikes, reviews) and volunteered information like frequent-traveler numbers.
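
None of these vendors publish their ranking code, but the underlying idea (learn from implicit feedback, then re-rank fresh results) can be shown with a toy model. Everything below, from the feature names to the training data, is invented for illustration:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each past option: [price_usd, number_of_stops, departs_morning, preferred_airline]
past_options = np.array([
    [320, 0, 1, 1],
    [210, 1, 0, 0],
    [450, 0, 0, 1],
    [190, 2, 1, 0],
])
clicked = np.array([1, 0, 1, 0])  # implicit feedback: did the traveler pick it?

model = LogisticRegression().fit(past_options, clicked)

# Re-rank fresh search results by the predicted probability of being chosen
new_options = np.array([[300, 0, 1, 1], [180, 1, 1, 0], [500, 0, 0, 1]])
scores = model.predict_proba(new_options)[:, 1]
for rank, i in enumerate(np.argsort(-scores), start=1):
    print(f"{rank}. option {i} (score {scores[i]:.2f})")

A production ranker would use far richer features and models, but the loop is the same: observe choices, refit, reorder.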

Business travel, too, is well positioned for the application of AI techniques, even if not all advances are visible to the naked eye. You can take photos of a stack of receipts on your smartphone; optical character recognition software codifies expense amounts and currencies, while machine learning algorithms pick out nuances like categories and spending patterns.
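
As a rough sketch of that receipt pipeline (OCR first, pattern extraction second), here is what the first stage might look like with off-the-shelf tools; the file name and the regular expression are illustrative assumptions, and production expense systems are far more robust:

import re
from PIL import Image
import pytesseract  # needs the Tesseract OCR engine installed

text = pytesseract.image_to_string(Image.open("receipt_photo.jpg"))

# Pull out money-like amounts such as "$42.50" or "19.99 USD"
amounts = re.findall(r"[$€£]\s?\d+[.,]\d{2}|\d+[.,]\d{2}\s?(?:USD|EUR|GBP)", text)
print(amounts)
# A classifier trained on past expense reports could then assign each
# amount a category (meals, lodging, transport) from the surrounding words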

AI is also improving efficiencies in many operational systems that form the backbone of travel. Machine learning is already starting to replace a lot of rule-based probabilistic models in airport systems to optimize flight landing paths to meet noise abatement guidelines, or change gate/ramp sequencing patterns to maximize fuel efficiency.

Making decisions based on reality

VR and AR are still changing and evolving rapidly, but with many consumer technology giants publicly announcing products this year, we can expect rapid early adoption and mainstreaming of these technologies. Just as music, photos, videos and messaging became ubiquitous thanks to embedded capabilities in our phones, future AR and VR applications are likely to become commonplace.

VR offers a rich, immersive experience for travel inspiration, and it is easy to imagine destination content being developed for a VR environment. But VR can also be applied to travel search and shopping. My company, Amadeus, recently demonstrated a seamless flight booking experience that includes seat selection and payment. Virtually “walking” onto an airplane and looking at a specific seat you are about to purchase makes it easier for consumers to make informed decisions, while allowing airlines to clearly differentiate their premium offerings.

AR will probably have a more immediate impact than VR, however, in part due to the presence of advanced camera, location and sensor technology already available today on higher-end smartphones. Airports are experimenting with beacon technology where an AR overlay would be able to easily and quickly guide you to your tight connection for an onward flight, or a tailored shopping or dining experience if you have a longer layover.

“Any sufficiently advanced technology is indistinguishable from magic,” goes Arthur C. Clarke’s famously quoted third law. And as we come to expect ever more authentic experiences, whether precise search results, an informed booking or an immersive travel adventure, we can count on increasingly magical technology from systems that learn to deliver our “perfect bowl of porridge.”

Source:

https://venturebeat.com/2017/08/03/how-tech-is-making-travels-inconveniences-much-more-convenient/

Facebook shuts down AI system after it invents own language

In recent weeks, a story about experimental Facebook machine learning research has been circulating with increasingly panicky, Skynet-esque headlines.

“Facebook engineers panic, pull plug on AI after bots develop their own language,” one site wrote. “Facebook shuts down AI after it invents its own creepy language,” another added. “Did we humans just create Frankenstein?” asked yet another. One British tabloid quoted a robotics professor saying the incident showed “the dangers of deferring to artificial intelligence” and “could be lethal” if similar tech was injected into military robots.

References to the coming robot revolution, killer droids, malicious AIs and human extermination abounded, some more serious than others. Continually quoted was this passage, in which two Facebook chat bots had learned to talk to each other in what is admittedly a pretty creepy way.

Bob: I can i i everything else

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

The reality is somewhat more prosaic. A few weeks ago, FastCo Design did report on a Facebook effort to develop a “generative adversarial network” for the purpose of developing negotiation software.

The two bots quoted in the above passage were designed, as explained in a Facebook Artificial Intelligence Research unit blog post in June, for the purpose of showing it is “possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.”

The bots were never doing anything more nefarious than discussing with each other how to divide an array of given items (represented in the user interface as innocuous objects like books, hats, and balls) in a mutually agreeable way.
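
FAIR’s post describes the task in terms of private point values: each bot values books, hats and balls differently, and the quality of a deal depends on who gets what. A toy scoring function makes the setup concrete (the numbers are invented, and this is not FAIR’s code):

POOL = {"book": 3, "hat": 2, "ball": 1}          # items on the table

alice_values = {"book": 1, "hat": 3, "ball": 1}  # private to Alice
bob_values = {"book": 2, "hat": 1, "ball": 4}    # private to Bob

def score(values, share):
    # points an agent earns from its share of the items
    return sum(values[item] * n for item, n in share.items())

# one possible agreed split; the whole pool must be handed out
alice_share = {"book": 1, "hat": 2, "ball": 0}
bob_share = {item: POOL[item] - alice_share[item] for item in POOL}

print("Alice:", score(alice_values, alice_share))  # 1*1 + 3*2 + 1*0 = 7
print("Bob:  ", score(bob_values, bob_share))      # 2*2 + 1*0 + 4*1 = 8

Training then rewards each bot with its own score for the final agreed split, which is where the reinforcement learning mentioned below comes in.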

The intent was to develop a chatbot that could learn from human interaction to negotiate deals with an end user so fluently that the user would not realize they were talking with a robot. FAIR said the effort was a success:

“The performance of FAIR’s best negotiation agent, which makes use of reinforcement learning and dialog rollouts, matched that of human negotiators … demonstrating that FAIR’s bots not only can speak English but also think intelligently about what to say.”

Source:

http://gizmodo.com/no-facebook-did-not-panic-and-shut-down-an-ai-program-1797414922

Meet Orii, the AI-powered smart Bluetooth ring that will make you feel like a secret agent

Bluetooth earpieces are passé. Origami Labs, a Hong Kong-based startup, has developed a smart solution for receiving smartphone alerts. Called Orii, it is a cross between a ring and a Bluetooth earpiece, worn on the finger instead of the ear. The futuristic-looking smart accessory uses bone conduction so you can communicate with just a fingertip, and offers voice control as a bonus.

Bone conduction is not a new technology and has already been explored by the hearing-aid market. The audio technology transmits sound through bone directly to the inner ear, which also helps the audio cut through ambient noise. Applied to the Orii smart ring, this means users can simply place a fingertip to their ear and discreetly talk over a call or respond to alerts.

The Orii ring also comes with built-in support for voice controls. By saying “Orii” followed by a command, a user can carry out a range of activities, including translation, map routing, checking the calendar and setting alerts. In addition, the smart ring supports both Google Assistant and Siri, making it compatible with both Android and iOS devices; to wake either assistant, one simply long-presses the CapSense button on the ring.

Given the personal and discreet nature of the device, having every notification buzz on your finger could be quite distracting. To tackle this, the companion Orii app lets users filter which types of notifications are sent to the ring, with an LED indicator on top showing a customized color for each.
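
As a sketch of that filtering logic, with invented category names and colors:

# user's filter settings: only these categories reach the ring
ALLOWED = {"calls": "blue", "messages": "green"}

def route_notification(category):
    color = ALLOWED.get(category)
    if color is None:
        return None  # filtered out: stays on the phone, ring stays silent
    return {"vibrate": True, "led_color": color}

print(route_notification("messages"))  # {'vibrate': True, 'led_color': 'green'}
print(route_notification("email"))     # None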

Source:

http://www.bgr.in/news/meet-orii-the-ai-powered-smart-bluetooth-ring-that-will-make-you-feel-like-a-secret-agent/

Warehouse Picking Bots Could Soon Replace Expensive Humans

As consumer demand for e-commerce grows, retailers are counting on robotics to lend a hand. Robotics companies and researchers are developing picking robots to select individual items and put them in boxes. It may sound like a small task, but at the scale of online retailer Amazon, the development looks set to revolutionize one of the most labor-intensive aspects of e-commerce. Up to now, warehouse “picking” — the act of grabbing an item from a warehouse shelf — has largely been done by humans. But that is changing, reports The Wall Street Journal.

Per the WSJ:

Picking is the biggest labor cost in most e-commerce distribution centers, and among the least automated. Swapping in robots could cut the labor cost of fulfilling online orders by a fifth, said Marc Wulfraat, president of consulting firm MWPVL International Inc.

“When you’re talking about hundreds of millions of units, those numbers can be very significant,” he said. “It’s going to be a significant edge for whoever gets there first.”

So while robots have been able to grab items for a while, the approach has proven impractical in warehouses that stock an ever-changing rotation of products. However, more sophisticated robotic arms are being developed that can recognize different items and handle them with tactile differentiation, all while amassing data on their experiences to inform future picking practices. Essentially, these picking robots can learn from their collective memory.

For retailers, this could be a game-changer. For some it means the possibility of fully automated warehouses, a “lights-out” scenario: a facility wouldn’t need overhead lamps because the robots wouldn’t need them. For others, it’s the ability to refine the jobs of human employees and solve a labor shortage that retailers have been dealing with as e-commerce has continued to expand. According to the U.S. Census Bureau, e-commerce revenues reached $390 billion in 2016 — twice as much as in 2011.

Hudson’s Bay — a Canadian retail giant that also owns Saks Fifth Avenue — is currently testing a robot made by RightHand Robotics in its Ontario distribution center. The robot has an arm that can pick up different items and put them in various boxes. The latest iteration of the company’s gripper is pretty incredible; it has impressive tactile functions, various robotic finger options and even fingernails for grasping slim objects.

Amazon is eager to advance this technology through what is arguably one of the biggest new robotics competitions. The Amazon Robotics Challenge will be held from July 27 to 30 in Nagoya, Japan, and invites the academic robotic community to share research and pit their picking robots against each other in performance competitions.

The WSJ reports that over the last five years, U.S. warehouses have added 262,000 jobs to a sector that now employs 950,000 people. Whether these robots will have an effect on the jobs of laborers is yet to be seen.

Source:

https://www.inverse.com/article/34563-robotic-arms-could-soon-be-picking-out-your-amazon-orders

Apple launches machine learning research site

Apple just launched a blog focused on machine learning research, where it will publish papers and share the company’s findings. The Apple Machine Learning Journal is a bit empty right now, as the company has shared only one post, about turning synthetic images into realistic ones in order to train neural networks.

This move is interesting, as Apple doesn’t usually talk about its research projects. The company has contributed to and launched some important open source projects, such as WebKit, the browser engine behind Safari, and Swift, Apple’s latest programming language for iOS, macOS, watchOS and tvOS. But a blog with research papers on artificial intelligence projects is something new for Apple.

It’s interesting for a few reasons. First, this research paper has already been published on arXiv. Today’s version covers the same ground, but the language is a bit simpler, and Apple has added GIFs to illustrate the results.

According to the paper, Apple needed to train its neural networks to detect faces and other objects in photos. But instead of putting together huge libraries with hundreds of millions of sample photos to train those networks, Apple created synthetic images of computer-generated characters and applied a filter to make them look real. This made training the neural network cheaper and faster.
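
The underlying paper (a version of which appeared at CVPR 2017 as “Learning from Simulated and Unsupervised Images through Adversarial Training”) trains a refiner network with two objectives: an adversarial loss that pushes refined images toward realism, and a self-regularization term that keeps them close to the synthetic input so the original annotations stay valid. A compressed sketch of that objective, with placeholder network sizes and random tensors standing in for data (a reconstruction of the idea, not Apple’s code):

import torch
import torch.nn as nn

refiner = nn.Sequential(            # synthetic image -> refined image
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
discriminator = nn.Sequential(      # real vs. refined, one logit per image
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 1),
)
opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

synthetic = torch.rand(8, 1, 32, 32)  # stand-in for rendered images
real = torch.rand(8, 1, 32, 32)       # stand-in for unlabeled photos

# refiner step: fool the discriminator while staying near the synthetic input
refined = refiner(synthetic)
adv = bce(discriminator(refined), torch.ones(8, 1))
self_reg = (refined - synthetic).abs().mean()
loss_r = adv + 0.5 * self_reg
opt_r.zero_grad(); loss_r.backward(); opt_r.step()

# discriminator step: separate real photos from refined synthetic images
d_real = bce(discriminator(real), torch.ones(8, 1))
d_fake = bce(discriminator(refiner(synthetic).detach()), torch.zeros(8, 1))
loss_d = d_real + d_fake
opt_d.zero_grad(); loss_d.backward(); opt_d.step()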

Second, Apple tells readers to email the company in its inaugural post. There’s also a big link in the footer to look at job openings at Apple. It’s clear that Apple plans to use this platform to find promising engineers in that field.

Third, many people have criticized Apple when it comes to machine learning, saying that companies like Google and Amazon are more competent. It’s true that Apple has been quieter, and some consumer products, like Google’s Assistant and Amazon’s Alexa, are much better than Apple’s Siri.

But Apple has also been doing great work on analyzing your photo library on your device, on the depth effect on the iPhone 7 Plus and on augmented reality with ARKit. Apple wants to correct this narrative.

Source:

Apple launches machine learning research site
