Tag Archives: artificial intelligence

Google wants AI to manage my relationships, and that might be a good thing

When Google said that not sharing photographs of your friends made you “kind of a terrible person” at this year’s I/O keynote, I bristled. The idea that its new Google Photos app would automatically suggest I share pictures with specific people sounded dystopian, especially because so much of the keynote seemed geared toward getting Google’s AI systems to help maintain relationships. Want to answer an email without even thinking about it? Inbox’s suggested responses are rolling out all over Gmail. Has a special moment with somebody slipped your mind? Google might organize photos from it into a book and suggest you have it printed.

 

Google is far from the first company to do this; Facebook suggests pictures to share and reminds you of friends’ birthdays all the time, for example. It’s easy to describe these features as creepy false intimacy, or say that they’re making us socially lazy, relieving us of the burden of paying attention to people. But the more I’ve thought about it, the more I’ve decided that I’m all right with an AI helping manage my connections with other people — because otherwise, a lot of those connections wouldn’t exist at all.

I don’t know if I’m a terrible person per se, but I may be the world’s worst relative. I have an extended network of aunts, uncles, cousins, and family friends that I would probably like but don’t know very well, and almost never see face-to-face. They’re the kind of relationships that some people I know maintain with family newsletters, emailed photos, and holiday cards. But I have never figured out how to handle any of these things.


Source:

https://www.theverge.com/2017/5/19/15660610/google-photos-ai-relationship-emotional-labor


Facebook’s new research tool is designed to create a truly conversational AI

Most of us talk to our computers on a semi-regular basis, but that doesn’t mean the conversation is any good. We ask Siri what the weather is like, or tell Alexa to put some music on, but we don’t expect sparkling repartee — voice interfaces right now are as sterile as the visual interfaces they’re supposed to replace. Facebook, though, is determined to change this: today it unveiled a new research tool that the company hopes will spur progress in the march to create truly conversational AI.

The tool is called ParlAI (pronounced like Captain Jack Sparrow asking to parley) and is described by the social media network as a “one-stop shop for dialog research.” It gives AI programmers a simple framework for training and testing chatbots, complete with access to datasets of sample dialogue and a “seamless” pipeline to Amazon’s Mechanical Turk service. The latter is a crucial feature, as it means programmers can easily hire humans to interact with, test, and correct their chatbots.
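
To give a rough sense of the kind of framework the article describes, here is a minimal Python sketch of a teacher/agent dialogue loop: a teacher serves questions from a toy dataset, an agent replies, and the teacher scores the answers. The class and method names are purely illustrative and are not ParlAI’s actual API.

```python
# Illustrative sketch of a dialog-research loop; not ParlAI's real API.

class Teacher:
    """Serves examples from a toy dialogue dataset and scores replies."""
    def __init__(self, examples):
        self.examples = examples  # list of (question, answer) pairs
        self.correct = 0

    def act(self, turn):
        question, _ = self.examples[turn]
        return {"text": question}

    def observe(self, turn, reply):
        _, answer = self.examples[turn]
        self.correct += int(reply["text"].strip().lower() == answer.lower())


class CannedAgent:
    """A stand-in 'chatbot' that always gives the same reply."""
    def observe(self, message):
        self.last_message = message

    def act(self):
        return {"text": "Paris"}


def run_dialogue(teacher, agent):
    """One pass over the dataset: teacher asks, agent answers, teacher scores."""
    for turn in range(len(teacher.examples)):
        agent.observe(teacher.act(turn))
        teacher.observe(turn, agent.act())
    return teacher.correct / len(teacher.examples)


data = [("What is the capital of France?", "Paris"),
        ("What is the capital of Italy?", "Rome")]
print(run_dialogue(Teacher(data), CannedAgent()))  # 0.5 for this canned agent
```

In a real framework the canned agent would be replaced by a trained model, and the human-in-the-loop piece (the Mechanical Turk pipeline) would plug in where the scripted teacher does.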

Abigail See, a computer science PhD student at Stanford University, welcomed the news, saying frameworks like this were “very valuable” to scientists. “There’s a huge volume of AI research being produced right now, with new techniques, datasets and results announced every month,” said See in an email to The Verge. “Platforms [like ParlAI] offer a unified framework for researchers to easily develop, compare and replicate their experiments.”

In a group interview, Antoine Bordes from Facebook’s AI research lab FAIR said that ParlAI was designed to fill a missing link in the world of chatbots. “Right now there are two types of dialogue systems,” explains Bordes. The first, he says, are systems that “actually serve some purpose” and execute an action for the user (e.g., Siri and Alexa); the second serve no practical purpose but are genuinely entertaining to talk to (like Microsoft’s Tay — although, yes, that one didn’t turn out great).

 

“What we’re after with ParlAI is more about having a machine where you can have multi-turn dialogue; where you can build up a dialogue and exchange ideas,” says Bordes. “ParlAI is trying to develop the capacity for chatbots to enter long-term conversation.” This, he says, will require memory on the bot’s part, as well as a good deal of external knowledge (provided via access to datasets like Wikipedia), and perhaps even an idea of how the user is feeling. “In that respect, the field is very preliminary and there is still a lot of work to do,” says Bordes.

It’s important to note that ParlAI isn’t a tool for just anyone. Unlike, say, Microsoft’s chatbot frameworks, this is a piece of kit that’s aimed at the cutting-edge AI research community, rather than developers trying to create a simple chatbot for their website. It’s not so much about building actual bots, but finding the best ways to train them in the first place. There’s no doubt, though, that this work will eventually filter through to Facebook’s own products (like its part-human-powered virtual assistant M) and to its chatbot platform for Messenger.


Source:

https://www.theverge.com/2017/5/15/15640886/facebook-parlai-chatbot-research-ai-chatbot

Google’s AI Invents Sounds Humans Have Never Heard Before

Jesse Engel is playing an instrument that’s somewhere between a clavichord and a Hammond organ—18th-century classical crossed with 20th-century rhythm and blues. Then he drags a marker across his laptop screen. Suddenly, the instrument is somewhere else between a clavichord and a Hammond. Before, it was, say, 15 percent clavichord. Now it’s closer to 75 percent. Then he drags the marker back and forth as quickly as he can, careening through all the sounds between these two very different instruments.

“This is not like playing the two at the same time,” says one of Engel’s colleagues, Cinjon Resnick, from across the room. And that’s worth saying. The machine and its software aren’t layering the sounds of a clavichord atop those of a Hammond. They’re producing entirely new sounds using the mathematical characteristics of the notes that emerge from the two. And they can do this with about a thousand different instruments—from violins to balafons—creating countless new sounds from those we already have, thanks to artificial intelligence.
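
At a high level, the trick behind that marker is interpolation: each instrument’s notes are encoded as points in a learned numerical space, and the slider picks a point between them, which a decoder then turns back into audio. A toy NumPy sketch of the blending step, with random vectors standing in for real NSynth embeddings (this is illustrative only, not Magenta’s code):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for learned embeddings of a clavichord note and a Hammond note.
# Real NSynth embeddings come from a neural autoencoder trained on audio.
clavichord = rng.normal(size=16)
hammond = rng.normal(size=16)

def blend(clav_fraction):
    """Linearly interpolate: 0.15 means '15 percent clavichord', and so on."""
    return clav_fraction * clavichord + (1.0 - clav_fraction) * hammond

for fraction in (0.15, 0.5, 0.75):
    hybrid = blend(fraction)
    # A decoder network would turn `hybrid` back into sound; here we just
    # show that each slider setting yields a distinct new point between the two.
    print(fraction, np.round(hybrid[:4], 3))
```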

Engel and Resnick are part of Google Magenta—a small team of AI researchers inside the internet giant building computer systems that can make their own art—and this is their latest project. It’s called NSynth, and the team will publicly demonstrate the technology later this week at Moogfest, the annual art, music, and technology festival, held this year in Durham, North Carolina.

The idea is that NSynth, which Google first discussed in a blog post last month, will provide musicians with an entirely new range of tools for making music. Critic Marc Weidenbaum points out that the approach isn’t very far removed from what orchestral conductors have done for ages—“the blending of instruments is nothing new,” he says—but he also believes that Google’s technology could push this age-old practice into new places. “Artistically, it could yield some cool stuff, and because it’s Google, people will follow their lead,” he says.

The Boundaries of Sound

Magenta is part of Google Brain, the company’s central AI lab, where a small army of researchers are exploring the limits of neural networks and other forms of machine learning. Neural networks are complex mathematical systems that can learn tasks by analyzing large amounts of data, and in recent years they’ve proven to be an enormously effective way of recognizing objects and faces in photos, identifying commands spoken into smartphones, and translating from one language to another, among other tasks. Now the Magenta team is turning this idea on its head, using neural networks as a way of teaching machines to make new kinds of music and other art.


Source:

https://www.wired.com/2017/05/google-uses-ai-create-1000s-new-musical-instruments/

Google’s New AI Tool Turns Your Selfies Into Emoji

Machine learning and artificial intelligence have, for a couple of years, been hailed as the death knell for almost everything you can imagine: the information we consume, the way we vote, the jobs we have, and even our very existence as a species. (Food for thought: the stuff about ML taking over Homo sapiens totally makes sense, even if you haven’t just taken a huge bong rip.) So maybe it’s welcome news that the newest application of ML from Google, worldwide leaders in machine learning, isn’t to build a new Mars rover or a chatbot that can replace your doctor. Rather, it’s a tool that anyone can use to generate custom emoji stickers of themselves.


It lives inside of Allo, Google’s ML-driven chat app. Starting today, when you pull up the list of stickers you can use to respond to someone, there’s a simple little option: “Turn a selfie into stickers.” Tap, and it prompts you to take a selfie. Then, Google’s image-recognition algorithms analyze your face, mapping each of your features to those in a kit illustrated by Lamar Abrams, a storyboard artist, writer, and designer for the critically acclaimed Cartoon Network series Steven Universe. There are, of course, literally hundreds of eyes and noses and face shapes and hairstyles and glasses available. All told, Google thinks there are 563 quadrillion faces that the tool could generate. Once that initial caricature is generated, you can then make tweaks: Maybe change your hair, or give yourself different glasses. Then, the machine automatically generates 22 custom stickers of you.
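
That 563 quadrillion figure is straightforward combinatorics: the number of distinct stickers is the product of the number of options for each independent feature. A quick illustration with invented option counts (the real feature inventory and its numbers are Google’s, not these):

```python
from math import prod

# Made-up option counts per facial feature, purely to show how
# independent choices multiply into an enormous space of faces.
options = {
    "face shape": 20, "skin tone": 12, "eyes": 40, "eyebrows": 25,
    "nose": 30, "mouth": 30, "hair style": 50, "hair color": 15,
    "facial hair": 12, "glasses": 18,
}

print(f"{prod(options.values()):,} possible faces")
# ~35 trillion with these invented counts; Google's larger real inventory
# is what pushes the total to 563 quadrillion.
```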

The tool originated with an internal research project to see if ML could be used to generate an instant cartoon of someone, using just a selfie. But as Jason Cornwell, who leads UX for Google’s communication projects, points out, making a cartoon of someone isn’t much of an end goal. “How do you make something that doesn’t just convey what you look like but how you want to project yourself?” asks Cornwell. “That’s an interesting problem. It gets to ML and computer vision but also human expression. That’s where Jennifer came in. To provide art direction about how you might convey yourself.”

Cornwell is referring to Jennifer Daniel, the vibrant, well-known art director who first made her name for the zany, hyper-detailed infographics she created for Bloomberg Businessweek in the Richard Turley era, and then did a stint doing visual op-eds for the New York Times. As Daniel points out, “Illustrations let you bring emotional states in a way that selfies can’t.” Selfies are, by definition, idealizations of yourself. Emoji, by contrast, are distillations and exaggerations of how you feel. To that end, the emoji themselves are often hilarious: You can pick one of yourself as a slice of pizza, or a drooling zombie. “The goal isn’t accuracy,” explains Cornwell. “It’s to let someone create something that feels like themselves, to themselves.” So the user testing involved asking people to generate their own emoji and then asking questions such as: “Do you see yourself in this image? Would your friends recognize you?”


Source:

https://www.fastcodesign.com/90124964/exclusive-new-google-tool-uses-ai-to-create-custom-emoji-of-you-from-a-selfie

Facebook created a faster, more accurate translation system using artificial intelligence

Facebook’s billion-plus users speak a plethora of languages, and right now, the social network supports translation of over 45 different tongues. That means that if you’re an English speaker confronted with German, or a French speaker seeing Spanish, you’ll see a link that says “See Translation.”

But Tuesday, Facebook announced that its machine learning experts have created a neural network that translates language up to nine times faster and more accurately than other current systems that use a standard method to translate text.

The scientists who developed the new system work at the social network’s FAIR group, which stands for Facebook A.I. Research.

“Neural networks are modeled after the human brain,” says Michael Auli of FAIR, one of the researchers behind the new system. One of the problems a neural network can help solve is translating a sentence from one language to another, like French into English. The network could also be used for tasks like summarizing text, according to a blog post from Facebook about the research.


 

But there are multiple types of neural networks. The standard approach so far has been to use recurrent neural networks to translate text; these look at one word at a time and then predict what the output word in the new language should be, learning the sentence as they read it. The Facebook researchers, however, tapped a different technique, called a convolutional neural network, or CNN, which looks at words in groups instead of one at a time.

“It doesn’t go left to right,” Auli says, of their translator. “[It can] look at the data all at the same time.” For example, a convolutional neural network translator can look at the first five words of a sentence, while at the same time considering the second through sixth words, meaning the system works in parallel with itself.
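
Here is a minimal NumPy sketch of that difference, not Facebook’s actual model: the recurrent version must update a hidden state one word at a time, while the convolutional version applies the same filter to every window of words, and those windows are independent of one another, so they can be computed in parallel.

```python
import numpy as np

rng = np.random.default_rng(1)
sentence = rng.normal(size=(10, 8))  # 10 words, each an 8-dim embedding

# Recurrent style: step t cannot start until step t-1 has finished.
W_h, W_x = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
h = np.zeros(8)
recurrent_states = []
for word in sentence:
    h = np.tanh(h @ W_h + word @ W_x)
    recurrent_states.append(h)

# Convolutional style: one filter slides over every 5-word window;
# the windows do not depend on each other, so a GPU can do them all at once.
window = 5
conv_filter = rng.normal(size=(window, 8))
conv_features = np.array([
    np.sum(sentence[i:i + window] * conv_filter)  # one feature per window
    for i in range(len(sentence) - window + 1)
])

print(len(recurrent_states), conv_features.shape)  # 10 sequential steps vs 6 windows
```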

Graham Neubig, an assistant professor at Carnegie Mellon University’s Language Technologies Institute, researches natural language processing and machine translation. He says this isn’t the first time this kind of neural network has been used to translate text, but that it seems to be the best execution of a convolutional approach to translation that he has seen.

“What this Facebook paper has basically showed—it’s revisiting convolutional neural networks, but this time they’ve actually made it really work very well,” he says.

Facebook isn’t yet saying how it plans to integrate the new technology into its consumer-facing products; that’s more the purview of a department there called the applied machine learning group. But in the meantime, it has released the technology as open source, so other coders can benefit from it.

That’s a point that pleases Neubig. “If it’s fast and accurate,” he says, “it’ll be a great additional contribution to the field.”

Source:

http://www.popsci.com/facebook-created-faster-more-accurate-translation-system-using-artificial-intelligence

This startup’s ‘software robots’ are taking the jobs of low-skilled office workers

The $30m raised last week by UiPath, which builds apps to automate repetitive office work, is the largest investment a Romanian startup has ever received.

Its tools are used by leading companies working in financial services, insurance, and healthcare, and each software robot license can replace up to five low-skilled full-time human employees, UiPath says.

The firm’s software robots mimic human users. Once installed on a computer and trained to perform certain tasks, they can read screens the way a human does and can perform a broad range of tasks, such as saving email attachments from clients, extracting data from a particular field in a bill, and importing that data into a company’s software, where it can be manipulated by a human employee.
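
To make “saving email attachments from clients” concrete, here is a short Python sketch using the standard imaplib and email modules. It illustrates the kind of repetitive task being automated, not UiPath’s product; the server address, credentials, and folder names are placeholders.

```python
import imaplib
import email
from pathlib import Path

# Placeholder connection details, not real credentials.
HOST, USER, PASSWORD = "imap.example.com", "robot@example.com", "secret"
OUTPUT_DIR = Path("attachments")
OUTPUT_DIR.mkdir(exist_ok=True)

mailbox = imaplib.IMAP4_SSL(HOST)
mailbox.login(USER, PASSWORD)
mailbox.select("INBOX")

# Find unread messages and save any attachments they carry.
_, message_ids = mailbox.search(None, "UNSEEN")
for msg_id in message_ids[0].split():
    _, data = mailbox.fetch(msg_id, "(RFC822)")
    message = email.message_from_bytes(data[0][1])
    for part in message.walk():
        filename = part.get_filename()
        if filename:  # this MIME part is an attachment
            (OUTPUT_DIR / filename).write_bytes(part.get_payload(decode=True))

mailbox.logout()
```

A commercial robot adds the parts this sketch skips: reading arbitrary application screens, handling errors, and handing the extracted data to downstream software.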

A software robot could be trained to install copies of Office on Windows machines, for example. It knows where and when to click next, and which buttons to check. Of course, it still needs to wait for files to copy during certain steps of the installation process.

One of the unusual approaches that UiPath has adopted is that it offers its software free to companies with a turnover below $1m.

UiPath was founded in Romania in 2012, by former Microsoft software developer Daniel Dines, now CEO, and Marius Tirca, CTO.

It grew from 10 employees two years ago to 150 today. About 100 of them are based in Bucharest, Romania, where the tech team is located. The company has physical offices in New York, London, Bangalore, Tokyo, and Singapore, and plans to set up shop in Hong Kong and Sydney.

UiPath’s turnover is undisclosed, but management says it increased sixfold in 2016, and most of its customers are in the US and Europe. CEO Dines said he’s working with two top-10 Fortune Global companies, among others.

 

A competitor to Automation Anywhere and Blue Prism, UiPath says it will use the money raised in the series A round led by venture capital firm Accel Partners to expand the business and develop its technologies.

CTO Tirca said his tech team is working on adding more cognitive capabilities to the software, such as natural language processing and machine learning. Work is also going on to improve the way the robots handle unstructured data.

UiPath plans to double the team by the end of this year, tapping into Romania’s vibrant tech talent pool. The salaries it offers are among the highest in the country, but its technical job interviews are among the most difficult. The management wants to recruit the best and brightest, regardless of their experience in the field.

The robotic process automation market is expected to approach $9bn by 2024, according to Grand View Research. It reckons small and mid-size companies will benefit most from automation, as software robots are 65 percent less expensive than full-time employees. Forrester estimates that, by 2021, there will be over four million robots doing office, administrative, sales, and related tasks.


Source:

http://www.zdnet.com/article/this-startups-software-robots-are-taking-the-jobs-of-low-skilled-office-workers/

The inventor of Siri says one day AI will be used to upload and access our memories

Artificial intelligence may one day surpass human intelligence. But, if designed right, it may also be used to enhance human cognition.

Tom Gruber, one of the inventors of the artificial intelligence voice interface Siri that now lives inside iPhones and the macOS operating system, shared a new idea at the TED 2017 conference today for using artificial intelligence to augment human memory.

“What if you could have a memory that was as good as computer memory and is about your life?” Gruber asked the audience. “What if you could remember every person you ever met? How to pronounce their name? Their family details? Their favorite sports? The last conversation you had with them?”
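
Stripped of the AI, the underlying data model is easy to picture: a personal index keyed by person, updated over time from conversations and shared media. A toy Python sketch of such an index (all fields and entries invented for illustration):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PersonMemory:
    """One entry in a hypothetical personal memory index."""
    name: str
    pronunciation: str = ""
    family_details: List[str] = field(default_factory=list)
    favorite_sports: List[str] = field(default_factory=list)
    last_conversation: str = ""

memory_index = {}

def remember(person: PersonMemory) -> None:
    memory_index[person.name.lower()] = person

def recall(name: str) -> Optional[PersonMemory]:
    """What an assistant might surface when you meet someone again."""
    return memory_index.get(name.lower())

remember(PersonMemory(
    name="Alex Rivera",                      # invented example
    pronunciation="AL-ex ri-VAIR-ah",
    family_details=["two kids", "brother in Lisbon"],
    favorite_sports=["cycling"],
    last_conversation="Planning a trip to Portugal in June.",
))
print(recall("Alex Rivera"))
```

The hard part Gruber is pointing at is not the lookup table but populating it automatically and keeping it private, which is where the AI comes in.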

Gruber said he thinks that using artificial intelligence to catalog our experiences and to enhance our memory isn’t just a wild idea — it’s inevitable.

 

And the whole reason Gruber says it’s possible: data about the media we consume and the people we talk to is already available, because we use the internet and our smartphones to mediate our lives.

 

Privacy is no small consideration here. “We get to choose what is and is not recalled and retained,” said Gruber. “It’s absolutely essential that this be kept very secure.”

Though the idea of digitally storing our memories certainly raises a host of unsettling possibilities, Gruber says that AI memory enhancement could be a life-changing technology for those who suffer from Alzheimer’s or dementia.


 

Gruber isn’t the only one in Silicon Valley thinking of ways to get inside your head. Last week at the annual Facebook developer conference, Mark Zuckerberg shared a project Facebook is working on to build non-invasive sensors that will read brain activity. The sensors are being designed to read the part of your brain that translates thoughts to speech to allow you to type what you’re thinking.

And Elon Musk, CEO of Tesla and SpaceX, has started a new company called Neuralink to build wireless brain-computer interface technology. Musk shared his idea for the technology, which he calls “neural lace,” at Recode’s Code Conference last year.


Source:

https://www.recode.net/2017/4/25/15424174/siri-apple-tom-gruber-ted-memories-artificial-intelligence

The smartphone is eventually going to die — this is Mark Zuckerberg’s crazy vision for what comes next

At this week’s Facebook F8 conference in San Jose, Mark Zuckerberg doubled down on his crazy ambitious 10-year plan for the company, first revealed in April 2016.

Basically, Zuckerberg uses this roadmap to demonstrate Facebook’s three-stage game plan in action: First, you take the time to develop a neat cutting-edge technology. Then you build a product based on it. Then you turn it into an ecosystem where developers and outside companies can use that technology to build their own businesses.

When Zuckerberg first announced this plan last year, it was big on vision, but short on specifics.

On Facebook’s planet of 2026, the entire world has internet access — with many people likely getting it through Internet.org, Facebook’s connectivity arm. Zuckerberg reiterated this week that the company is working on smart glasses that look like your normal everyday Warby Parkers. And underpinning all of this, Facebook is promising artificial intelligence good enough that we can talk to computers as easily as chatting with humans.


A world without screens

For science-fiction lovers, the world Facebook is starting to build is very cool and insanely ambitious. Instead of smartphones, tablets, TVs, or anything else with a screen, all our computing is projected straight into our eyes as we type with our brains.

A mixed-reality world is exciting for society and for Facebook shareholders. But it also opens the door to some crazy future scenarios, where Facebook, or some other tech company, intermediates everything you see, hear, and maybe even think. And as we ponder the implications of that kind of future, consider how fast we’ve already progressed on Zuckerberg’s timeline.

We’re now one year closer to Facebook’s vision for 2026. And things are slowly, but surely, starting to come together, as the social network’s plans for virtual and augmented reality, universal internet connectivity, and artificial intelligence start to slowly move from fantasy into reality.

In fact, Michael Abrash, the chief scientist of Facebook-owned Oculus Research, said this week that we could be just 5 years away from a point where augmented reality glasses become good enough to go mainstream. And Facebook is now developing technology that lets you “type” with your brain, meaning you’d type, point, and click by literally thinking at your smart glasses. Facebook is giving us a glimpse of this with the Camera Effects platform, making your phone into an AR device.

Fries with that?

The potential here is tremendous. Remember that Facebook’s mission is all about sharing, and this kind of virtual, ubiquitous “teleportation” and interaction is an immensely powerful means to that end.

This week, Oculus unveiled “Facebook Spaces,” a “social VR” app that lets denizens of virtual reality hang out with each other, even if some people are in the real world and some people have a headset strapped on. It’s slightly creepy, but it’s a sign of the way that Facebook sees you and your friends spending time together in the future. 

And if you’re wearing those glasses, there’s no guarantee that the person who’s taking your McDonald’s order is a human, after all. Imagine a virtual avatar sitting at the cash register, projected straight into your eyeballs, and taking your order. With Facebook announcing its plans to revamp its Messenger platform with AI features that also make it more business-friendly, the virtual fast-food cashier is not such a far-fetched scenario.

Sure, Facebook Messenger chatbots have struggled to gain widespread acceptance since they were introduced a year ago. But as demonstrated with Microsoft’s Xiaoice and even the Tay disaster, we’re inching towards more human-like systems that you can just talk to. And if Facebook’s crazy plan to let you “hear” with your skin plays out, they can talk to you while you’re wearing those glasses. And again, you’ll be able to reply with just a thought.


Source:

http://www.businessinsider.com/facebook-f8-mark-zuckerberg-augmented-reality-2026-2017-4

Supercharge healthcare with artificial intelligence

Pattern-recognition algorithms can transform horses into zebras; winter scenes can become summer; artificial intelligence algorithms can generate art; robot radiologists can analyze your X-rays with remarkable precision.

We have reached the point where pattern-recognition algorithms and artificial intelligence (A.I.) are more accurate than humans at the visual diagnosis and observation of X-rays, stained breast cancer slides and other medical signs involving general correlations between normal and abnormal health patterns.
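
Under the hood this is the standard supervised-learning recipe: gather labeled examples of normal and abnormal cases, extract features, and fit a classifier. A minimal scikit-learn sketch using its bundled Wisconsin breast-cancer dataset (tabular features derived from cell images) shows the shape of the workflow; it is only an illustration, nothing like a production diagnostic system.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Features computed from digitized images of breast-mass cell nuclei,
# each example labeled benign or malignant.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=0)

# Scale the features, then fit a simple linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```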

Before we run off and fire all the doctors, let’s better understand the A.I. landscape and the technology’s broad capabilities. A.I. won’t replace doctors — it will help to empower them and extend their reach, improving patient outcomes.

An evolution of machine learning

The challenge with artificial intelligence is that no single and agreed-upon definition exists. Nils Nilsson defined A.I. as “activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” But that definition isn’t close to describing how A.I. evolved.

Artificial intelligence began with the Turing Test, proposed in 1950 by Alan Turing, the scientist, cryptanalyst and theoretical biologist. In the decades since, rapid progress has steadily advanced A.I. capabilities.

Isaac Asimov laid out the Three Laws of Robotics in his 1950 collection I, Robot. The first A.I. program was coded in 1951. In 1959, MIT began research in the field of artificial intelligence. GM introduced the first robot into its production assembly line in 1961. The 1960s were transformative, with the first machine learning program written, the first demonstration of an A.I. program that understood natural language, and the emergence of the first chatbot. In the 1970s, the first autonomous vehicle was designed at the Stanford A.I. lab. Healthcare applications for A.I. were first introduced in 1974, along with an expert system for medical diagnostics. Lisp machines and expert systems flourished in the 1980s, and neural networks began to be paired with autonomous vehicles. IBM’s famous Deep Blue beat Garry Kasparov at chess in 1997. And by 1999, the world was experimenting with A.I.-based “domesticated” robots.

Innovation was further inspired in 2004 when DARPA hosted the first design competition for autonomous vehicles in the commercial sector. By 2005, big tech companies, including IBM, Microsoft, Google and Facebook, were actively investing in commercial applications, and the first recommendation engines surfaced. The highlight of 2009 was Google’s first self-driving car, some three decades after the first autonomous vehicle was tested at Stanford.

Narrative science, using A.I. to write reports, was demonstrated in 2010, and IBM Watson was crowned a Jeopardy! champion in 2011. Narrative science quickly evolved into personal assistants with the likes of Siri, Google Now and Cortana. In 2015, Elon Musk and others launched OpenAI to discover and enact a path to safe artificial general intelligence — to build a friendly A.I. In early 2016, Google DeepMind’s AlphaGo defeated legendary Go player Lee Se-dol in a historic victory.

Source:

http://www.cio.com/article/3191593/artificial-intelligence/supercharge-healthcare-with-artificial-intelligence.html