Tag Archives: artificial intelligence

Apple launches machine learning research site

Apple just launched a blog focused on machine learning research papers and sharing the company’s findings. The Apple Machine Learning Journal is a bit empty right now as the company only shared one post about turning synthetic images into realistic ones in order to train neural networks.

This move is interesting as Apple doesn’t usually talk about its research projects. The company has contributed to and launched some important open source projects, such as WebKit, the browser engine behind Safari, and Swift, Apple’s latest programming language for iOS, macOS, watchOS and tvOS. But a blog sharing research papers on artificial intelligence projects is something new for Apple.

It’s interesting for a few reasons. First, this research paper has already been published on arXiv. Today’s version covers the same ground, but in simpler language, and Apple has added GIFs to illustrate the results.

According to this paper, Apple has had to train its neural network to detect faces and other objects in photos. But instead of putting together huge libraries with hundreds of millions of sample photos to train this neural network, Apple created synthetic images of computer-generated characters and applied a filter to make those synthetic images look real. It was cheaper and faster to train the neural network this way.
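The recipe can be caricatured in a few lines. The sketch below is only a toy stand-in: where Apple trains a neural “refiner” network adversarially, we simply match each synthetic image’s pixel statistics (mean and spread) to those of real photos, and `real` and `synthetic` are made-up arrays, not Apple’s data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: noisy "real" photos and overly clean "synthetic" renders
real = rng.normal(loc=0.55, scale=0.20, size=(100, 32, 32))
synthetic = rng.normal(loc=0.40, scale=0.05, size=(100, 32, 32))

def refine(img, real_mean, real_std):
    # Nudge a synthetic image's statistics toward those of real photos.
    # Apple's actual refiner is a neural network trained adversarially;
    # this moment-matching step is only an illustration of the goal.
    z = (img - img.mean()) / (img.std() + 1e-8)
    return z * real_std + real_mean

refined = np.stack([refine(s, real.mean(), real.std()) for s in synthetic])

# After refinement, the synthetic set matches real-image statistics far better
print(abs(refined.mean() - real.mean()) < abs(synthetic.mean() - real.mean()))  # True
```

The refined images can then serve as cheap, automatically labeled training data in place of millions of hand-gathered photos.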


Second, Apple tells readers to email the company in its inaugural post. There’s also a big link in the footer to look at job openings at Apple. It’s clear that Apple plans to use this platform to find promising engineers in that field.

 

Third, many people have criticized Apple when it comes to machine learning, saying that companies like Google and Amazon are more competent. And it’s true that the company has been quieter. Consumer products like Google Assistant and Amazon’s Alexa are also much better than Apple’s Siri.

But Apple has also done great work analyzing your photo library on-device, building the depth effect on the iPhone 7 Plus, and developing augmented reality with ARKit. Apple wants to correct this narrative.

Source:

Apple launches machine learning research site


Google’s DeepMind Turns to Canada for Artificial Intelligence Boost

Google’s high-profile artificial intelligence unit has a new Canadian outpost.

DeepMind, which Google bought in 2014 for roughly $650 million, said Wednesday that it would open a research center in Edmonton, Canada. The new research center, which will work closely with the University of Alberta, is the United Kingdom-based DeepMind’s first international AI research lab.

 

DeepMind, now a subsidiary of Google parent company Alphabet (GOOG), recruited three University of Alberta professors to lead the new research lab. The professors—Rich Sutton, Michael Bowling, and Patrick Pilarski—will maintain their positions at the university while working at the new research office.

Sutton, in particular, is a noted expert in a subset of AI technologies called reinforcement learning, and was an advisor to DeepMind in 2010. With reinforcement learning, computers search for the best possible way to achieve a particular goal, learning from each failed attempt.
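The search-and-learn-from-failure loop can be shown concretely. The sketch below is a minimal tabular Q-learning example, an illustrative toy of reinforcement learning in general, not DeepMind’s code: an agent on a five-cell corridor gradually learns that stepping right reaches the goal.

```python
import random

# Tiny corridor world: states 0..4, reward only for reaching state 4
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                  # step left, step right
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)

def greedy(s):
    # best-known action, with random tie-breaking
    return max(ACTIONS, key=lambda a: (Q[(s, a)], random.random()))

for _ in range(500):                        # episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0      # reward only at the goal
        best_next = 0.0 if s2 == GOAL else max(Q[(s2, b)] for b in ACTIONS)
        # every step, including failures, nudges the value estimate
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the learned policy prefers stepping right in every non-goal state
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(GOAL)))  # True
```

DeepMind’s systems apply the same principle at vastly larger scale, with neural networks standing in for the lookup table.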

DeepMind has popularized reinforcement learning in recent years through its AlphaGo program, which has beaten the world’s top players at the ancient Chinese board game Go. Google has also incorporated some of DeepMind’s reinforcement learning techniques in its data centers to discover the calibrations that result in lower power consumption.

“DeepMind has taken this reinforcement learning approach right from the very beginning, and the University of Alberta is the world’s academic leader in reinforcement learning, so it’s very natural that we should work together,” Sutton said in a statement. “And as a bonus, we get to do it without moving.”

DeepMind has also been investigated by the United Kingdom’s Information Commissioner’s Office (ICO), which found it failed to comply with the UK’s Data Protection Act as it expands its technology into the healthcare space.

Information Commissioner Elizabeth Denham said in a statement on Monday that the office discovered a “number of shortcomings” in the way DeepMind handled patient data as part of a clinical trial that used its technology to detect, diagnose, and alert clinicians to kidney injuries. The ICO claims that DeepMind failed to explain to participants how their medical data would be used for the project.

DeepMind said Monday that it “underestimated the complexity” of the United Kingdom’s National Health Service “and of the rules around patient data, as well as the potential fears about a well-known tech company working in health.” DeepMind said it would now be more open with the public, patients, and regulators about how it uses patient data.

“We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole,” DeepMind said in a statement. “We got that wrong, and we need to do better.”

Source:

http://fortune.com/2017/07/05/google-deepmind-artificial-intelligence-canada/

Microsoft’s next big Windows update will use AI to fight malware

Windows Fall Creators Update will come with a hefty serving of security upgrades, made timely by the increasingly rampant cyberattacks targeting the platform these days. In a blog post, Microsoft has revealed how the upcoming major update will level up Windows Defender Advanced Threat Protection, a Win 10 enterprise service that flags early signs of infection. According to CNET, Windows enterprise director Rob Lefferts said the upgrade will use data from Redmond’s cloud-based services to create an AI anti-virus that will make ATP much better at preventing cyberattacks.

One of the AI’s features is the ability to instantly pick up the presence of previously unknown malware on a computer. Microsoft can then quickly quarantine the malware in the cloud and create a signature for it that can be used to protect other computers. Lefferts says about 96 percent of cyberattacks use new malware, so this feature sounds especially helpful. It could certainly change the way Microsoft rolls out defense measures, since it currently takes researchers hours to conjure a signature up. By the time they’re done, the malware might have already made its way to more computers.
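That detect-once, protect-everywhere flow can be sketched in a few lines. Everything below is a toy illustration with hypothetical names and a trivial stand-in for the cloud model, not how Defender ATP actually works:

```python
import hashlib

# Signatures published by the cloud, instantly visible to every client
cloud_signatures = set()

def cloud_ml_verdict(file_bytes):
    # Stand-in for the cloud-based ML model; here, a trivial heuristic
    return b"evil" in file_bytes

def inspect(file_bytes):
    sig = hashlib.sha256(file_bytes).hexdigest()
    if sig in cloud_signatures:
        return "blocked by signature"      # cheap, instant local check
    if cloud_ml_verdict(file_bytes):
        cloud_signatures.add(sig)          # publish signature to all clients
        return "quarantined by cloud ML"
    return "clean"

print(inspect(b"evil payload"))   # quarantined by cloud ML
print(inspect(b"evil payload"))   # blocked by signature
print(inspect(b"hello world"))    # clean
```

The point of the design is the second call: once one machine’s encounter produces a signature, every other machine blocks the same file without waiting hours for a researcher.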

While ATP’s new security features will initially only be available to enterprise customers, CNET says Microsoft has plans to roll them out to ordinary users. In addition, the company wants ATP to support “more platforms beyond Windows” and has begun working to make that happen. Microsoft will release the Fall Creators Update preview between September and October, so these features (and more) will start hitting some businesses’ PCs around that time.

Source:

https://www.engadget.com/2017/06/28/microsoft-windows-fall-creators-update-security/

Google’s AI Vision May No Longer Include Giant Robots

Good news for the deeply paranoid among us: If the apocalypse arrives via giant anthropomorphic robots, they probably won’t be bankrolled by Google. On Thursday, Google’s parent company, Alphabet, announced that it was selling Boston Dynamics, its premier robotics division, to the Japanese telco giant SoftBank for an undisclosed sum. The deal also includes a smaller robotics company called Schaft.

Boston Dynamics was less a moonshot than a sci-fi horror brought to life. Even before being acquired by Google in 2013, the 25-year-old company had already developed a Beast Wars–style squadron of robot predators with names like BigDog and WildCat, as well as a humanoid model called Atlas. The machines were often developed for the Pentagon under contracts with agencies such as the Defense Advanced Research Projects Agency. Google and the government both said the robots were being tested for disaster-relief scenarios, but that never stopped the stream of headlines describing them as “scary,” “nightmare-inducing,” or “evil.”

Whether Google’s ultimate plans were benign or nefarious, they never properly got off the ground. Both Boston Dynamics and Schaft were part of a months-long spending spree Google bankrolled to appease Andy Rubin, the creator of Android, who was looking to robots as his next frontier for innovation. But Rubin left Google in 2014, creating a leadership vacuum as the company struggled to get its various robotics acquisitions, headquartered around the world, to work in tandem. Under Rubin, Google reportedly had plans to launch a consumer robotics product by 2020, but that timeline now seems in doubt. (Alphabet still owns several smaller robotics startups that specialize in areas such as industrial manufacturing and film production.)

In the years since the Boston Dynamics acquisition, Google has shown that it doesn’t need to build a robot butler (or soldier) to create a future dominated by artificial intelligence. Machine-learning algorithms now guide most of the company’s products, whether recommending YouTube videos, identifying objects in users’ photo libraries, or whisking people around in driverless cars. The company is partnering with appliance manufacturers like General Electric so that people can control their ovens via voice commands to Google Home. And most ambitiously, at this year’s Google I/O, the company unveiled a suite of new products related to its machine-learning framework, TensorFlow. Developers will soon be able to make use of the same AI engines that power Google’s products to improve their own offerings via the company’s cloud-computing platform.

In the company’s ideal future, every human-machine interaction will be powered by Google, even if a specific app or appliance doesn’t have Google’s name on it. Terminator-style robots (OK, hopefully Jetsons-style) may one day be part of that vision, but the company can easily build an AI army with the products that fill our homes and garages today.

Source:

https://theringer.com/google-boston-dynamics-ai-robots-61a6a6c3bfec

Musk predicts AI will be better than humans at everything in 2030

In response to an article by New Scientist predicting that artificial intelligence will be able to beat humans at everything and anything by 2060, Elon Musk replied that he believed the milestone would come much sooner – around 2030 to 2040.

New Scientist based its story on a survey of more than 350 AI researchers, who collectively estimate a 50% chance that AI will outperform humans at all tasks within 45 years.

At a high level, the data is not shocking, but more of an interesting tidbit from the future. Dive into the details of when those same AI experts believe machines will beat humans at specific tasks, though, and things get a little creepy. Experts believe machines will be better at translating languages than humans by 2024 – something Google already does on the fly for webpages, and for the spoken word via Google Translate.

High school students everywhere will be outclassed by AI that is estimated to outperform them at essay writing by 2026. AI moves in to take over truck driving by 2027, though we believe this will happen much sooner based on the progress Tesla is making with autonomous driving. Tesla has a fully autonomous cross-country trip planned for later this year that, if successful, will pave the way for autonomous vehicle technology to go mainstream.


The estimates get stranger, with AI predicted to be able to write a bestselling book better than humans by 2049 and to perform extremely complex, dynamic surgery by 2053. All human jobs are expected to be automated within 120 years, which is admittedly quite a bit farther out than 2060, but that is representative of the long tail of increasingly smaller tasks.

Elon is not all rainbows and sunshine about AI, which is why he helped start the non-profit OpenAI. He co-founded the organization specifically to map out a path forward for AI research and development, and to ensure that AI is created in an intentional and safe manner.

OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.

While industry-specific AI automates the individual tasks or groups of tasks that make up each industry, from trucking to making tacos at your local taqueria, OpenAI is looking beyond that to the first artificial general intelligence. This is an intelligence that will have the ability to adapt dynamically to a situation, learn new tasks, apply itself creatively to new conditions, and perform much like a human would. OpenAI believes that a dynamic AGI will far surpass the AI deployed in any specific industry and will be a game-changer, packing the power to change the world in ways we never imagined.

With that goal in mind, OpenAI is pushing the envelope in an attempt to define the cutting edge of AI and to thereby earn the right to define the future of AI for the world. As famed computer scientist Alan Kay once said, “The best way to predict the future is to invent it.”

Elon surely has his finger on the pulse of AI and believes that it is highly likely that it will have a massive impact on humanity. OpenAI carries this belief forward, stating that,

Artificial general intelligence (AGI) will be the most significant technology ever created by humans.

Though Elon is confident AI is moving forward at a far faster pace than scientists believe, and is actively working to shape its future, he still fears the technology.

Source:

https://www.teslarati.com/musk-predicts-ai-will-better-humans-everything-2030/

Google wants AI to manage my relationships, and that might be a good thing

When Google said that not sharing photographs of your friends made you “kind of a terrible person” at this year’s I/O keynote, I bristled. The idea that its new Google Photos app would automatically suggest I share pictures with specific people sounded dystopian, especially because so much of the keynote seemed geared toward getting Google’s AI systems to help maintain relationships. Want to answer an email without even thinking about it? Inbox’s suggested responses are rolling out all over Gmail. Has a special moment with somebody slipped your mind? Google might organize photos from it into a book and suggest you have it printed.

 

Google is far from the first company to do this; Facebook suggests pictures to share and reminds you of friends’ birthdays all the time, for example. It’s easy to describe these features as creepy false intimacy, or say that they’re making us socially lazy, relieving us of the burden of paying attention to people. But the more I’ve thought about it, the more I’ve decided that I’m all right with an AI helping manage my connections with other people — because otherwise, a lot of those connections wouldn’t exist at all.

I don’t know if I’m a terrible person per se, but I may be the world’s worst relative. I have an extended network of aunts, uncles, cousins, and family friends that I would probably like but don’t know very well, and almost never see face-to-face. They’re the kind of relationships that some people I know maintain with family newsletters, emailed photos, and holiday cards. But I have never figured out how to handle any of these things.


Source:

https://www.theverge.com/2017/5/19/15660610/google-photos-ai-relationship-emotional-labor

Facebook’s new research tool is designed to create a truly conversational AI

Most of us talk to our computers on a semi-regular basis, but that doesn’t mean the conversation is any good. We ask Siri what the weather is like, or tell Alexa to put some music on, but we don’t expect sparkling repartee — voice interfaces right now are as sterile as the visual interface they’re supposed to replace. Facebook, though, is determined to change this: today it unveiled a new research tool that the company hopes will spur progress in the march to create truly conversational AI.

The tool is called ParlAI (pronounced like Captain Jack Sparrow asking to parley) and is described by the social media network as a “one-stop shop for dialog research.” It gives AI programmers a simple framework for training and testing chatbots, complete with access to datasets of sample dialogue and a “seamless” pipeline to Amazon’s Mechanical Turk service. The latter is a crucial feature, as it means programmers can easily hire humans to interact with, test, and correct their chatbots.
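The value of such a framework is easiest to see as a loop: an agent observes a message, responds, and is scored against a reference reply (or a hired human judge). The sketch below uses a deliberately simplified, hypothetical API of our own, not ParlAI’s actual classes:

```python
class EchoAgent:
    # A trivial "chatbot" that parrots the last utterance back
    def act(self, observation):
        return {"text": observation["text"]}

def evaluate(agent, dataset):
    # Score an agent against (prompt, expected_reply) pairs; in a
    # Mechanical Turk pipeline, a human worker would judge replies
    # instead of comparing against fixed expectations.
    correct = 0
    for prompt, expected in dataset:
        reply = agent.act({"text": prompt})["text"]
        correct += (reply == expected)
    return correct / len(dataset)

dataset = [("hello", "hello"), ("how are you?", "fine")]
print(evaluate(EchoAgent(), dataset))  # 0.5
```

A shared loop like this is what lets researchers swap in different agents and datasets and compare results on equal footing.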

Abigail See, a computer science PhD student at Stanford University, welcomed the news, saying frameworks like this are “very valuable” to scientists. “There’s a huge volume of AI research being produced right now, with new techniques, datasets and results announced every month,” said See in an email to The Verge. “Platforms [like ParlAI] offer a unified framework for researchers to easily develop, compare and replicate their experiments.”

In a group interview, Antoine Bordes from Facebook’s AI research lab FAIR said that ParlAI was designed to create a missing link in the world of chatbots. “Right now there are two types of dialogue systems,” explains Bordes. The first, he says, are those that “actually serve some purpose” and execute an action for the user (e.g., Siri and Alexa); while the second serves no purpose, but is actually entertaining to talk to (like Microsoft’s Tay — although, yes, that one didn’t turn out great).

 

“What we’re after with ParlAI, is more about having a machine where you can have multi-turn dialogue; where you can build up a dialogue and exchange ideas,” says Bordes. “ParlAI is trying to develop the capacity for chatbots to enter long-term conversation.” This, he says, will require memory on the bot’s part, as well as a good deal of external knowledge (provided via access to datasets like Wikipedia), and perhaps even an idea of how the user is feeling. “In that respect, the field is very preliminary and there is still a lot of work to do,” says Bordes.

It’s important to note that ParlAI isn’t a tool for just anyone. Unlike, say, Microsoft’s chatbot frameworks, this is a piece of kit that’s aimed at the cutting-edge AI research community, rather than developers trying to create a simple chatbot for their website. It’s not so much about building actual bots, but finding the best ways to train them in the first place. There’s no doubt, though, that this work will eventually filter through to Facebook’s own products (like its part-human-powered virtual assistant M) and to its chatbot platform for Messenger.


Source:

https://www.theverge.com/2017/5/15/15640886/facebook-parlai-chatbot-research-ai-chatbot

Google’s AI Invents Sounds Humans Have Never Heard Before

Jesse Engel is playing an instrument that’s somewhere between a clavichord and a Hammond organ—18th-century classical crossed with 20th-century rhythm and blues. Then he drags a marker across his laptop screen. Suddenly, the instrument is somewhere else between a clavichord and a Hammond. Before, it was, say, 15 percent clavichord. Now it’s closer to 75 percent. Then he drags the marker back and forth as quickly as he can, careening through all the sounds between these two very different instruments.

“This is not like playing the two at the same time,” says one of Engel’s colleagues, Cinjon Resnick, from across the room. And that’s worth saying. The machine and its software aren’t layering the sounds of a clavichord atop those of a Hammond. They’re producing entirely new sounds using the mathematical characteristics of the notes that emerge from the two. And they can do this with about a thousand different instruments—from violins to balafons—creating countless new sounds from those we already have, thanks to artificial intelligence.
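The trick behind the marker is latent-space interpolation: each note is encoded into a compact vector, and the slider blends the two vectors before the model decodes a brand-new sound. The sketch below illustrates only that blending step, with made-up random vectors standing in for real learned codes; it is not NSynth’s code.

```python
import numpy as np

rng = np.random.default_rng(0)
z_clavichord = rng.normal(size=16)   # pretend latent code for a clavichord note
z_hammond = rng.normal(size=16)      # pretend latent code for a Hammond note

def blend(alpha):
    # A latent code that is `alpha` parts clavichord, (1 - alpha) parts Hammond;
    # dragging the on-screen marker sweeps alpha continuously
    return alpha * z_clavichord + (1 - alpha) * z_hammond

z_15, z_75 = blend(0.15), blend(0.75)   # "15 percent" vs. "75 percent" clavichord

print(np.allclose(blend(1.0), z_clavichord), np.allclose(blend(0.0), z_hammond))  # True True
```

Because the decoder turns any point on that line into audio, every intermediate alpha is a playable instrument that never existed before.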

Engel and Resnick are part of Google Magenta—a small team of AI researchers inside the internet giant building computer systems that can make their own art—and this is their latest project. It’s called NSynth, and the team will publicly demonstrate the technology later this week at Moogfest, the annual art, music, and technology festival, held this year in Durham, North Carolina.

The idea is that NSynth, which Google first discussed in a blog post last month, will provide musicians with an entirely new range of tools for making music. Critic Marc Weidenbaum points out that the approach isn’t very far removed from what orchestral conductors have done for ages—“the blending of instruments is nothing new,” he says—but he also believes that Google’s technology could push this age-old practice into new places. “Artistically, it could yield some cool stuff, and because it’s Google, people will follow their lead,” he says.

The Boundaries of Sound

Magenta is part of Google Brain, the company’s central AI lab, where a small army of researchers are exploring the limits of neural networks and other forms of machine learning. Neural networks are complex mathematical systems that can learn tasks by analyzing large amounts of data, and in recent years they’ve proven to be an enormously effective way of recognizing objects and faces in photos, identifying commands spoken into smartphones, and translating from one language to another, among other tasks. Now the Magenta team is turning this idea on its head, using neural networks as a way of teaching machines to make new kinds of music and other art.


Source:

https://www.wired.com/2017/05/google-uses-ai-create-1000s-new-musical-instruments/

Google’s New AI Tool Turns Your Selfies Into Emoji

Machine learning and artificial intelligence have, for a couple years, been hailed as the death knell for almost everything you can imagine: The information we consume, the way we vote, the jobs we have, and even our very existence as a species. (Food for thought: The stuff about ML taking over Homo sapiens totally makes sense, even if you haven’t just taken a huge bong rip.) So maybe it’s welcome news that the newest application of ML from Google, worldwide leaders in machine learning, isn’t to build a new Mars rover or a chatbot that can replace your doctor. Rather, it’s a tool that anyone can use to generate custom emoji stickers of themselves.


It lives inside of Allo, Google’s ML-driven chat app. Starting today, when you pull up the list of stickers you can use to respond to someone, there’s a simple little option: “Turn a selfie into stickers.” Tap, and it prompts you to take a selfie. Then, Google’s image-recognition algorithms analyze your face, mapping each of your features to those in a kit illustrated by Lamar Abrams, a storyboard artist, writer, and designer for the critically acclaimed Cartoon Network series Steven Universe. There are, of course, literally hundreds of eyes and noses and face shapes and hairstyles and glasses available. All told, Google thinks there are 563 quadrillion faces that the tool could generate. Once that initial caricature is generated, you can then make tweaks: Maybe change your hair, or give yourself different glasses. Then, the machine automatically generates 22 custom stickers of you.

The tool originated with an internal research project to see if ML could be used to generate an instant cartoon of someone, using just a selfie. But as Jason Cornwell, who leads UX for Google’s communication projects, points out, making a cartoon of someone isn’t much of an end goal. “How do you make something that doesn’t just convey what you look like but how you want to project yourself?” asks Cornwell. “That’s an interesting problem. It gets to ML and computer vision but also human expression. That’s where Jennifer came in. To provide art direction about how you might convey yourself.”

Cornwell is referring to Jennifer Daniel, the vibrant, well-known art director who first made her name for the zany, hyper-detailed infographics she created for Bloomberg Businessweek in the Richard Turley era, and then did a stint doing visual op-eds for the New York Times. As Daniel points out, “Illustrations let you bring emotional states in a way that selfies can’t.” Selfies are, by definition, idealizations of yourself. Emoji, by contrast, are distillations and exaggerations of how you feel. To that end, the emoji themselves are often hilarious: You can pick one of yourself as a slice of pizza, or a drooling zombie. “The goal isn’t accuracy,” explains Cornwell. “It’s to let someone create something that feels like themselves, to themselves.” So the user testing involved asking people to generate their own emoji and then asking questions such as: “Do you see yourself in this image? Would your friends recognize you?”


Source:

https://www.fastcodesign.com/90124964/exclusive-new-google-tool-uses-ai-to-create-custom-emoji-of-you-from-a-selfie

Facebook created a faster, more accurate translation system using artificial intelligence

Facebook’s billion-plus users speak a plethora of languages, and right now, the social network supports translation of over 45 different tongues. That means that if you’re an English speaker confronted with German, or a French speaker seeing Spanish, you’ll see a link that says “See Translation.”

But on Tuesday, Facebook announced that its machine learning experts have created a neural network that translates language up to nine times faster, and more accurately, than other current systems that use a standard method to translate text.

The scientists who developed the new system work at the social network’s FAIR group, which stands for Facebook A.I. Research.

“Neural networks are modeled after the human brain,” says Michael Auli, of FAIR, and a researcher behind the new system. One of the problems that a neural network can help solve is translating a sentence from one language to another, like French into English. This network could also be used to do tasks like summarize text, according to a blog item posted on Facebook about the research.


 

But there are multiple types of neural networks. The standard approach so far has been to use recurrent neural networks to translate text, which look at one word at a time and then predict what the output word in the new language should be. It learns the sentence as it reads it. But the Facebook researchers tapped a different technique, called a convolutional neural network, or CNN, which looks at words in groups instead of one at a time.

“It doesn’t go left to right,” Auli says, of their translator. “[It can] look at the data all at the same time.” For example, a convolutional neural network translator can look at the first five words of a sentence, while at the same time considering the second through sixth words, meaning the system works in parallel with itself.
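The structural difference Auli describes can be made concrete with a toy contrast between the two encoder styles. The sketch below uses made-up random weights and a bare-bones formulation, not Facebook’s actual architecture: the convolutional encoder computes every position from a local window of words (so all positions can run in parallel), while the recurrent encoder must march left to right.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 6, 8                        # sentence length, embedding size
x = rng.normal(size=(T, D))        # pretend word embeddings for one sentence
W = rng.normal(size=(3, D, D))     # conv filter spanning 3 consecutive words

def conv_encode(x, W):
    # Each output position depends only on a 3-word window, independently
    # of all other positions, so the whole sentence can be processed at once
    pad = np.vstack([np.zeros((1, x.shape[1])), x, np.zeros((1, x.shape[1]))])
    return np.stack([sum(pad[t + k] @ W[k] for k in range(3))
                     for t in range(x.shape[0])])

def rnn_encode(x, W):
    # Each step depends on the previous hidden state, forcing strictly
    # sequential, left-to-right computation
    h, out = np.zeros(x.shape[1]), []
    for t in range(x.shape[0]):
        h = np.tanh(x[t] @ W[0] + h @ W[1])
        out.append(h)
    return np.stack(out)

print(conv_encode(x, W).shape, rnn_encode(x, W).shape)  # (6, 8) (6, 8)
```

Both encoders produce one vector per word, but only the convolutional one has no step-to-step dependency, which is what lets GPUs compute all positions simultaneously (real systems then stack many such layers so distant words can still influence each other).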

Graham Neubig, an assistant professor at Carnegie Mellon University’s Language Technologies Institute, researches natural language processing and machine translation. He says that this isn’t the first time this kind of neural network has been used to translate text, but that this seems to be the best he’s ever seen it executed with a convolutional neural network.

“What this Facebook paper has basically showed—it’s revisiting convolutional neural networks, but this time they’ve actually made it really work very well,” he says.

Facebook isn’t yet saying how it plans to integrate the new technology into its consumer-facing products; that’s more the purview of a department there called the Applied Machine Learning group. But in the meantime, the researchers have released the technology as open source, so other coders can benefit from it.

That’s a point that pleases Neubig. “If it’s fast and accurate,” he says, “it’ll be a great additional contribution to the field.”

Source:

http://www.popsci.com/facebook-created-faster-more-accurate-translation-system-using-artificial-intelligence
