Tag Archives: artificial intelligence

Artificial Intelligence and Automation

When asked how he tells his kids to prepare for the future of working with artificial intelligence, Peter Norvig said, “I tell them… Wherever they will be working in 20 years probably doesn’t exist now. No sense training for it today. Be flexible, and have the ability to learn new things.”

Future-of-work experts and AI scientists believe that in the future there will be fewer traditional full-time jobs that require a single skill set, fewer routine administrative tasks, and fewer repetitive manual tasks; many jobs, instead, will revolve around working with “thinking” machines.

From managers to janitors, everyone will adopt new ways of doing their jobs with machines over the next 20 years or so. What remains unclear, however, is whether the technological revolution will create more employment opportunities than it destroys.

According to AI professor Toby Walsh, copying (AI) computer code costs almost nothing and takes almost no time. He goes on to say that anyone who claims to know whether technology will create more jobs than it destroys is fooling themselves, because nobody knows for sure. The jobs that AI creates will be different from the ones it destroys, and they will require entirely different skills.

Hamilton Calder, CEO of the Committee for Economic Development of Australia, thinks that everyone should learn to code. Mr. Charlton, however, disagrees strongly: he is confident that you need not compete with machines to be successful in the future economy. Professor Walsh argues that, even though machines will become far better coders than humans, for geeks there is a great future in inventing the future.

It is time people stopped encouraging the younger generation to work towards a ‘dream’ job, says Jan Owen, CEO of FYA. Nobody should focus on a single job. Instead, people should aim to develop a transferable skill set that includes digital and financial literacy, project management, collaboration, and the ability to carefully evaluate and analyze information.

Robert Hillard, a managing partner at Deloitte Consulting, believes that future work will be divided into three categories:

•    People who will work for machines, such as online store pickers and drivers.

•    People who will work with machines, such as surgeons using machine assistance to diagnose.

•    People who will work on machines, such as designers and programmers.

Human-machine teams will unite AI algorithms with human skills like emotional intelligence and judgment. According to Mr. Hillard, jobs will increase, but they probably will not be better. Those working for machines will have the most difficult time.

Being human is itself a skill that you can leverage for income. Computers have little emotional intelligence, so the social jobs that require it (marketing, nursing, psychology) are safe.

In the future, being human could itself be a job: providing services that machines cannot, such as empathy in the caring economy. Some of today’s unpaid volunteering roles could become “service jobs of love” in the future.

Computers are not creative or imaginative. Surprisingly, some of the oldest jobs, such as artisan and carpenter, will be among the most valuable. People would rather own something carved by a human than by a machine.

Even with all this preparation for future work, Mr. Dawson thinks that everyone should plan for themselves: develop the skills that will be needed, and always pay attention.

Rise of Automation – Technology and Robots Will Replace Humans

Elon Musk’s Tweet Gives Creepy Insight Into Future Of Humanoid Robots

Elon Musk’s predictions about robots are the stuff of nightmares.


Twitter user Alex Medina shared a promotional video of Boston Dynamics’ Atlas robot doing front flips and jumps in an obstacle course with the panicked caption: “We dead.” Musk responded to Medina by essentially telling him to buckle in for far more terrifying advancements in humanoid robots.


“This is nothing. In a few years, that bot will move so fast you’ll need a strobe light to see it. Sweet dreams…,” Musk wrote. 



The Atlas robot is marketed as the “world’s most dynamic humanoid,” but even that impressive advancement might soon be eclipsed by artificial intelligence. The Tesla CEO followed up his creepy comment with a warning about the future of leaving such technology unchecked.


“Got to regulate AI/robotics like we do food, drugs, aircraft & cars,” Musk tweeted.

“Public risks require public oversight. Getting rid of the FAA [wouldn’t] make flying safer. They’re there for good reason.”


So Musk is basically backing up every horror movie theory about artificial intelligence and robots ever. And this isn’t the first time the tech mogul has warned about these types of advancements. Musk said in September that artificial intelligence will probably be the spark that ignites a world war. 


“China, Russia, soon all countries w strong computer science,” Musk wrote on Twitter in September. “Competition for AI superiority at national level most likely cause of WW3 imo.” 


Well, hopefully Will Smith can protect humanity in the upcoming robot apocalypse. 



AI will obliterate half of all jobs, starting with white collar, says ex-Google China president

The upcoming worldwide workforce reckoning that artificial intelligence is expected to bring will happen much sooner than many experts predict, the former president of Google China told CNBC on Monday.

Kai-Fu Lee, now chairman and CEO of Sinovation Ventures, believes that about half of all jobs will disappear over the next decade and be replaced with AI and the next generation of robots in the fastest period of disruption in history.

“AI, at the same time, will be a replacement for blue collar and white collar jobs,” said Lee, a renowned Chinese technologist and investor who held positions at Apple and Microsoft in addition to Alphabet’s Google. But white collar jobs will go first, he warned.


“The white collar jobs are easier to take because they’re purely a quantitative analytical process. Reporters, traders, telemarketing, telesales, customer service, [and] analysts, they can all be replaced by software,” he explained on “Squawk Box.” “To do blue collar, some of the work requires hand-eye coordination, things that machines are not yet good enough to do.”


Lee knocked down the argument that the jobs lost will create new ones servicing and programming AI and robots. “Robots are clearly replacing people’s jobs. They’re working 24 by 7. They are more efficient. They need some programming. But one programmer can program 10,000 robots.”

Besides taking jobs beyond factory floors, robots and AI are already starting to take over some of the mundane tasks around people’s homes. Lee pointed to the Amazon Echo as an example.

“The robots don’t have to be anthropomorphized. They can just be an appliance,” he said. “The car that has autonomous driving is not going to have a humanoid person [driving]. It’s just going to be without a steering wheel.”

Lee said that while economic growth “will go dramatically up because AI can do so many things so much faster” than humans, it will force everyone to rethink the practical and social impact of fewer jobs. “If a lot of people will find happiness without working, that would be a happy outcome.”

But in a Washington Post op-ed last month, Lee argued against universal basic income, the idea of governments providing a steady stipend to help each citizen make ends meet regardless of need, employment status, or skill level. UBI is being bandied about as a possible solution to an economy that won’t have nearly enough jobs for working-age adults.

“The optimists naively assume that UBI will be a catalyst for people to reinvent themselves professionally,” he wrote. It may work among Silicon Valley and other highly motivated entrepreneurs, he added, “but this most surely will not happen for the masses of displaced workers with obsolete skills, living in regions where job loss is exacerbated by traditional economic downturn.”

Lee sees a different plan of action. “Instead of just redistributing cash and hoping for the best … we need to retrain and adapt so that everyone can find a suitable profession.”

Some of the solutions he offered in his commentary include developing more jobs that require social skills such as social workers, therapists, teachers, and life coaches as well as encouraging people to volunteer and considering paying them.

Lee wrote, “We need to redefine the idea of work ethic for the new workforce paradigm. The importance of a job should not be solely dependent on its economic value but should also be measured by what it adds to society.”

“We should also reassess our notion that longer work hours are the best way to achieve success,” he concluded.



The movie ‘Hidden Figures’ can teach us how to keep jobs in an AI future

This weekend, I finally watched “Hidden Figures.” I took my 9-year-old daughter with me to witness how instrumental women of color were to the success of several NASA missions — something that historically has been associated with white male achievement. If you have not seen it yet, I highly recommend it. The acting is superb, and the story offers so much education, both on race relations and women in the workplace.

What I want to focus on is possibly something the director and the cast never imagined could matter. I do — not because it is the most important aspect but simply because it is very relevant to the tech transition we are experiencing right now.


All the talk surrounding artificial intelligence is as much about the technology itself as it is about the impact its adoption will have on different aspects of our lives: business models in the automotive industry, the insurance business, public transportation, and search and advertising, as well as more personal consequences, such as human-to-human interaction and our sources of knowledge and education. Change will not come overnight, but we had better be prepared, because it will come.

New tech requires new skills

Change came in 1962 for the segregated West Area Computer Division of Langley Research Center in Virginia, where the three women who are the main protagonists of the story worked. Mathematician Katherine Goble and de facto supervisor Dorothy Vaughan are both directly affected by new tech rolling into the facility in the form of the IBM 7090.

If you are not familiar with the IBM 7090 (I was not before this weekend), it was the third member of the IBM 700/7000 series of computers designed for large-scale scientific and technological applications. In layman’s terms, the 7090 could perform in the blink of an eye all the calculations that took the computer division hours. Dorothy understood the threat and, armed with her wit and a book on programming languages, helped program the IBM 7090, taught her team to do the same, shifted their skills, and saved their jobs.

I realize that part of this story might be for the benefit of the screenplay, and that the world is much more complicated. However, I do think that what is at the core is very relevant: The creation of new skill sets.

Although AI has the potential to affect not only manual jobs that can be automated but also, theoretically, jobs that require learning and decision making, the immediate threat is certainly to the former.

We focus a lot, and rightly so, on the job loss AI will cause, but we have not yet started to focus on teaching the new skills that could limit those losses. As I said, AI will not magically appear overnight, but we would be fools to think we have plenty of time to create the skills our “augmented” world will require, from new programming languages to new branches of law and insurance, QA testing and more. Empowering people with new skills will be key not only to having a job but also to keeping our incomes in step with the higher costs these new worlds will entail. Providing a framework for education is a political responsibility as well as a corporate one.

Who will we trust?

The IBM 7090 replaces Katherine when it comes to checking calculations, but just as Friendship 7 is ready to launch, some discrepancies arise in the electronic calculations for the capsule’s recovery coordinates. Astronaut John Glenn asks the director of the Space Task Group to have Katherine recheck the numbers. When Katherine confirms the coordinates, Glenn thanks the director saying: “You know, you cannot trust something you cannot look in the eyes.”



An AI detected colorectal cancer with 86 percent accuracy

A new computer-aided endoscopic system that probes for signs of tumor or cancer growth in the colon may very well be the future of cancer detection. Assisted by artificial intelligence (AI), the new diagnostic system can tell whether clumps of cells that grow along the walls of the colon, called colorectal polyps, are benign tumors known as colorectal adenomas.

The computer-assisted diagnostic system was trained using more than 30,000 images of colorectal polyps, each magnified 500 times, and operates using machine learning. The AI can check approximately 300 features of a polyp, which it compares to its existing “knowledge,” in less than a second. After being used successfully in preliminary studies, prospective trials followed. The results of these trials, the first for AI-assisted endoscopy in a clinical setting, were presented at the 25th UEG Week in Barcelona, Spain.

The prospective study was conducted by a team led by Dr. Yuichi Mori from Showa University in Yokohama, Japan. Mori and his colleagues tested the new system in 250 patients previously identified as having colorectal polyps. The AI predicted the pathology of each polyp, and its predictions were compared with final pathology reports from the resected specimens. The results were highly encouraging: the system assessed 306 polyps in real time, with 94 percent sensitivity, 79 percent specificity, and 86 percent accuracy. In identifying abnormal tissue growth, the system demonstrated 79 percent positive and 93 percent negative predictive values.
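These rates all follow from standard confusion-matrix definitions. As a quick sketch (not the study’s code), here is how they are computed; the counts below are illustrative values chosen to be consistent with the 306-polyp total and the reported percentages, not the study’s raw data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),    # true-positive rate
        "specificity": tn / (tn + fp),    # true-negative rate
        "accuracy":    (tp + tn) / total,
        "ppv":         tp / (tp + fp),    # positive predictive value
        "npv":         tn / (tn + fn),    # negative predictive value
    }

# Illustrative counts only: 143 neoplastic and 163 non-neoplastic polyps,
# chosen so the metrics land close to the reported 94/79/86/79/93 percent.
m = diagnostic_metrics(tp=134, fp=34, tn=129, fn=9)
```

Sensitivity tells you how rarely the system misses a true adenoma, while specificity tells you how rarely it flags a harmless polyp, which is why both are quoted alongside overall accuracy.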


In short, the AI was able to fairly accurately identify which abnormal colon cell growths were most likely to be cancerous. “The most remarkable breakthrough with this system is that artificial intelligence enables real-time optical biopsy of colorectal polyps during colonoscopy, regardless of the endoscopists’ skill,” Mori said, speaking during the Opening Plenary at the UEG Week. “This allows the complete resection of adenomatous polyps and prevents unnecessary polypectomy of non-neoplastic polyps.”

Furthermore, the researchers presented the results of their prospective study to show that the system is ready for clinical trials. “We believe these results are acceptable for clinical application and our immediate goal is to obtain regulatory approval for the diagnostic system,” Mori added.

While this may be the first AI-enabled, real-time biopsy, as Mori described it, it is not the first time AI has been used to improve medical diagnosis and research. For example, there is an AI that is effective at identifying skin cancer, and chipmaker NVIDIA is working on a moonshot project to accelerate cancer research using deep learning. Also working in cancer diagnosis is IBM’s Watson, which in some cases has proven to be 99 percent accurate in recommending the same treatments as doctors. Improved cancer detection can spell the difference between a treatment that works and one that doesn’t, so these advancements are potentially life-saving.

Moving forward, Mori’s team plans to conduct a multicenter study to aid eventual clinical tests. They are also working on an automatic polyp detection system. “Precise on-site identification of adenomas during colonoscopy contributes to the complete resection of neoplastic lesions,” said Dr. Mori. “This is thought to decrease the risk of colorectal cancer and, ultimately, cancer-related death.”



Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent

Silicon Valley’s start-ups have always had a recruiting advantage over the industry’s giants: Take a chance on us and we’ll give you an ownership stake that could make you rich if the company is successful.

Now the tech industry’s race to embrace artificial intelligence may render that advantage moot — at least for the few prospective employees who know a lot about A.I.

Tech’s biggest companies are placing huge bets on artificial intelligence, banking on things ranging from face-scanning smartphones and conversational coffee-table gadgets to computerized health care and autonomous vehicles. As they chase this future, they are doling out salaries that are startling even in an industry that has never been shy about lavishing a fortune on its top talent.

Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.

Well-known names in the A.I. field have received compensation in salary and shares in a company’s stock that total single- or double-digit millions over a four- or five-year period. And at some point they renew or negotiate a new contract, much like a professional athlete.

At the top end are executives with experience managing A.I. projects. In a court filing this year, Google revealed that one of the leaders of its self-driving-car division, Anthony Levandowski, a longtime employee who started with Google in 2007, took home over $120 million in incentives before joining Uber last year through the acquisition of a start-up he had co-founded, a move that drew the two companies into a court fight over intellectual property.

Salaries are spiraling so fast that some joke the tech industry needs a National Football League-style salary cap on A.I. specialists. “That would make things easier,” said Christopher Fernandez, one of Microsoft’s hiring managers. “A lot easier.”

There are a few catalysts for the huge salaries. The auto industry is competing with Silicon Valley for the same experts who can help build self-driving cars. Giant tech companies like Facebook and Google also have plenty of money to throw around and problems that they think A.I. can help solve, like building digital assistants for smartphones and home gadgets and spotting offensive content.

Most of all, there is a shortage of talent, and the big companies are trying to land as much of it as they can. Solving tough A.I. problems is not like building the flavor-of-the-month smartphone app. In the entire world, fewer than 10,000 people have the skills necessary to tackle serious artificial intelligence research, according to Element AI, an independent lab in Montreal.

“What we’re seeing is not necessarily good for society, but it is rational behavior by these companies,” said Andrew Moore, the dean of computer science at Carnegie Mellon University, who previously worked at Google. “They are anxious to ensure that they’ve got this small cohort of people” who can work on this technology.

Costs at an A.I. lab called DeepMind, acquired by Google for a reported $650 million in 2014, when it employed about 50 people, illustrate the issue. Last year, according to the company’s recently released annual financial accounts in Britain, the lab’s “staff costs” as it expanded to 400 employees totaled $138 million. That comes out to $345,000 an employee.

“It is hard to compete with that, especially if you are one of the smaller companies,” said Jessica Cataneo, an executive recruiter at the tech recruiting firm CyberCoders.


IBM and MIT pen 10-year, $240M AI research partnership

IBM and MIT came together today to sign a 10-year, $240 million partnership agreement that establishes the MIT-IBM Watson AI Lab at the prestigious Cambridge, MA academic institution.

The lab will be co-chaired by Dario Gil, IBM Research VP of AI, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering.

Big Blue intends to invest $240 million into the lab where IBM researchers and MIT students and faculty will work side by side to conduct advanced AI research. As to what happens to the IP that the partnership produces, the sides were a bit murky about that.

This much we know: MIT plans to publish papers related to the research, while the two parties plan to open source a good part of the code. Some of the IP will end up inside IBM products and services. MIT hopes to generate some AI-based startups as part of the deal too.

“The core mission of [the] joint lab is to bring together MIT scientists and IBM [researchers] to shape the future of AI and push the frontiers of science,” IBM’s Gil told TechCrunch.

To that end, the two parties plan to put out requests to IBM scientists and the MIT student community to submit ideas for joint research. To narrow the focus of what could be a broad endeavor, they have established a number of principles to guide the research.


This includes developing AI algorithms with the goal of getting beyond specific applications for neural-based deep learning networks and finding more generalized ways to solve complex problems in the enterprise.

Secondly, they hope to harness the power of machine learning with quantum computing, an area that IBM is working hard to develop right now. There is tremendous potential for AI to drive the development of quantum computing and conversely for quantum computing and the computing power it brings to drive the development of AI.

With IBM’s Watson Security and Healthcare divisions located right down the street from MIT in Kendall Square, the two parties have agreed to concentrate on these two industry verticals in their work. Finally, the two teams plan to work together to help understand the social and economic impact of AI in society, which as we have seen has already proven to be considerable.

While this is a big deal for both MIT and IBM, Chandrakasan made clear that the lab is but one piece of a broader campus-wide AI initiative. Still, the two sides hope the new partnership will eventually yield a number of research and commercial breakthroughs that will lead to new businesses both inside IBM and in the Massachusetts startup community, particularly in the healthcare and cybersecurity areas.



Elon Musk Predicts The Cause Of World War III

Elon Musk has a prediction about the cause of World War III, and it’s not President Donald Trump and may not even involve humans at all.  

The head of Tesla and SpaceX on Monday shared a link on Twitter to a report about Russian President Vladimir Putin discussing artificial intelligence:

“Artificial intelligence is the future, not only for Russia, but for all humankind,” Putin was quoted as saying. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”



By comparison, Musk said, the saber-rattling from North Korea wasn’t much to worry about. 

One Twitter follower suggested that private companies, rather than governments, were far better at artificial intelligence. 


He also apologized for the glum tweets, saying he was depressing himself, and promised: “Fun, exciting tweets coming soon!”



Putin says the country that perfects AI will be ‘ruler of the world’

Forget the arms race or space race — the new battle for technological dominance revolves around AI, according to Vladimir Putin. The Russian President told students at a career guidance forum that the “future belongs to artificial intelligence,” and whoever is first to dominate this category will be the “ruler of the world.” In other words, Russia fully intends to be a front runner in the AI space. It won’t necessarily hog its technology, though.


Putin maintains that he doesn’t want to see anyone “monopolize” the field, and that Russia would share its knowledge with the “entire world” in the same way it shares its nuclear tech. We’d take this claim with a grain of salt (we wouldn’t be surprised if Russia held security-related AI secrets close to the vest), but this does suggest that the country might share some of what it learns.

Not that this reassures long-term AI skeptic Elon Musk. The entrepreneur believes that national-level competition to lead in AI will be the “most likely cause of WW3.” And it won’t even necessarily be the fault of overzealous leaders. Musk speculates that an AI could launch a preemptive strike if it decides that attacking first is the “most probable path to victory.” Hyperbolic? Maybe (you wouldn’t be the first to make that claim). It assumes that countries will put AI in charge of high-level decision making, Skynet-style, and that they might be willing to go to war over algorithms. Still, Putin’s remarks suggest that Musk’s concern has at least some grounding in reality: national pride is clearly at stake.



Facebook AI learns human reactions after watching hours of Skype

There’s something not quite right about humanoid robots. They are cute up to a point, but once they become a bit too realistic, they often start to creep us out, a phenomenon called the uncanny valley. Now Facebook wants robots to climb their way out of it.

Researchers at Facebook’s AI lab have developed an expressive bot, an animation controlled by an artificially intelligent algorithm. The algorithm was trained on hundreds of videos of Skype conversations, so that it could learn and then mimic how humans adjust their expressions in response to each other. In tests, it successfully passed as human-like.

To optimize its learning, the algorithm divided the human face into 68 key points that it monitored throughout each Skype conversation. People naturally produce nods, blinks and various mouth movements to show they are engaged with the person they are talking to, and eventually the system learned to do this too.
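To make the landmark idea concrete, here is a toy sketch (my own illustration, not Facebook’s code): each frame is reduced to 68 (x, y) landmark points, and a sustained downward drift of their centroid is the kind of signal a nod detector could use. The array shapes and the two-pixel-per-frame motion are assumptions for the example.

```python
import numpy as np

def vertical_motion(frames):
    """Per-frame vertical displacement of the landmark centroid.

    `frames` has shape (n_frames, 68, 2), one (x, y) pair per landmark.
    A run of positive deltas suggests the head is moving down, e.g. a nod.
    """
    centroids = frames.mean(axis=1)   # (n_frames, 2): mean landmark position
    return np.diff(centroids[:, 1])   # successive differences in y

# Toy data: a random face drifting downward 2 pixels per frame.
pts = np.random.rand(68, 2) * 100
frames = np.stack([pts + [0.0, 2.0 * t] for t in range(5)])
deltas = vertical_motion(frames)
```

A real system would of course feed richer per-point trajectories (for blinks and mouth movements) into a learned model rather than a hand-written rule, but the input representation is the same: time series of landmark coordinates.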


The bot was then able to look at a video of a human speaking, and choose in real time what the most appropriate facial response would be. If the person was laughing, for example, the bot might choose to open its mouth too, or tilt its head.

The Facebook team then tested the system with panels of people who watched animations that included both the bot reacting to a human, and a human reacting to a human. The volunteers judged the bot and the human to be equally natural and realistic.

However, as the animations were quite basic, it’s not clear whether a humanoid robot powered by this algorithm would have natural-seeming reactions.

Additionally, learning the basic rules of facial communication might not be enough to create truly realistic conversation partners, says Goren Gordon at Tel Aviv University in Israel. “Actual facial expressions are based on what you are thinking and feeling.”



The world’s top artificial intelligence companies are pleading for a ban on killer robots

A revolution in warfare, in which killer robots, or autonomous weapons systems, are common on the battlefield, is about to begin.

Both scientists and industry are worried.

The world’s top artificial intelligence (AI) and robotics companies have used a conference in Melbourne to collectively urge the United Nations to ban killer robots or lethal autonomous weapons.

An open letter by 116 founders of robotics and artificial intelligence companies from 26 countries was launched at the world’s biggest artificial intelligence conference, the International Joint Conference on Artificial Intelligence (IJCAI), as the UN delays meeting until later this year to discuss the robot arms race.

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, released the letter at the opening of the conference, the world’s pre-eminent gathering of experts in artificial intelligence and robotics.


The letter is the first time that AI and robotics companies have taken a joint stand on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal talks on autonomous weapons. Of these, 19 have already called for a ban.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter says.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Signatories of the 2017 letter include:

  • Elon Musk, founder of Tesla, SpaceX and OpenAI (US)
  • Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)
  • Esben Østergaard, founder & CTO of Universal Robotics (Denmark)
  • Jerome Monceaux, founder of Aldebaran Robotics, makers of Nao and Pepper robots (France)
  • Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
  • Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)

Walsh is one of the organisers of the 2017 letter, as well as an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons.

The 2015 letter was signed by thousands of researchers working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” says Walsh.

“It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.

“We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons,” he added.

Ryan Gariepy, founder of Clearpath Robotics, says the number of prominent companies and individuals who have signed this letter reinforces the warning that this is not a hypothetical scenario but a very real and pressing concern.

“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he says.

“The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”



How AI, AR, and VR are making travel more convenient

From 50 ways to leave your lover, as the song goes, to 750 types of shampoo, we live in an endless sea of choices. And although I haven’t been in the market for hair products in a while, I understand the appeal of picking a product that’s just right for you, even if the decision-making is often agonizing. This quandary (the “Goldilocks syndrome” of finding the option that is “just right”) has now made its way to the travel industry, as the race is on to deliver highly personalized and contextual offers for your next flight, hotel room or car rental.

Technology, of course, is both a key driver and enabler of this brave new world of merchandising in the travel business. But this is not your garden variety relational-databases-and-object-oriented-systems tech. What is allowing airlines, hotels and other travel companies to behave more like modern-day retailers is the clever use of self-learning systems, heuristics trained by massive data sets and haptic-enabled video hardware. Machine learning (ML), artificial intelligence (AI), augmented reality (AR) and virtual reality (VR) are starting to dramatically shape the way we will seek and select our travel experiences.

Let every recommendation be right

AI is already starting to change how we search for and book travel. Recent innovation and investment have poured into front-end technologies that leverage machine learning to fine-tune search results based on your explicit and implicit preferences. These range from algorithms that are constantly refining how options are ranked on your favorite travel website, to apps on your mobile phone that consider past trips, expressed sentiment (think thumbs up, likes/dislikes, reviews) and volunteered information like frequent traveler numbers.
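At its simplest, this kind of preference learning boils down to adjusting scoring weights from feedback signals. The sketch below is purely illustrative: the flight options, feature values and naive thumbs-up update rule are all made up, and real travel sites train far richer models on massive interaction datasets.

```python
# Toy sketch of preference-based re-ranking (hypothetical data).
# Feature vector per option: (price_fit, similarity_to_past_trips, rating).
options = {
    "morning nonstop":   (0.9, 0.9, 0.2),
    "red-eye one-stop":  (0.6, 0.2, 0.5),
    "afternoon nonstop": (0.5, 0.4, 1.0),
}
weights = [1.0, 1.0, 1.0]  # no learned preference yet

def rank():
    # Score each option as a weighted sum of its features, best first.
    score = lambda f: sum(w * x for w, x in zip(weights, f))
    return sorted(options, key=lambda name: -score(options[name]))

def thumbs_up(name, lr=0.5):
    # Nudge the weights toward the liked option's features,
    # mimicking an explicit like/dislike signal.
    for i, x in enumerate(options[name]):
        weights[i] += lr * x

before = rank()                      # ranked by the neutral weights
for _ in range(3):
    thumbs_up("afternoon nonstop")   # the user keeps liking high ratings
after = rank()
```

After a few simulated thumbs-up signals on the highly rated afternoon flight, the re-ranked list promotes it above the default top result.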

Business travel, as well, is positioned for the application of AI techniques, even if not all advances are visible to the naked eye. You can take photos of a stack of receipts on your smartphone; optical character recognition software codifies expense amounts and currencies, while machine learning algorithms pick out nuances like categories and spending patterns.

AI is also improving efficiencies in many operational systems that form the backbone of travel. Machine learning is already starting to replace a lot of rule-based probabilistic models in airport systems to optimize flight landing paths to meet noise abatement guidelines, or change gate/ramp sequencing patterns to maximize fuel efficiency.

Making decisions based on reality

VR and AR are still changing and evolving rapidly, and with many consumer technology giants publicly announcing products this year, we can expect to see rapid early adoption and mainstreaming of these technologies. Just as music, photos, videos and messaging became ubiquitous thanks to embedded capabilities in our phones, future AR and VR applications are likely to become commonplace.

VR offers a rich, immersive experience for travel inspiration, and it is easy to imagine destination content being developed for a VR environment. But VR can also be applied to travel search and shopping. My company, Amadeus, recently demonstrated a seamless flight booking experience that includes seat selection and payment. Virtually “walking” onto an airplane and looking at the specific seat you are about to purchase makes it easier for consumers to make informed decisions, while allowing airlines to clearly differentiate their premium offerings.


AR will probably have a more immediate impact than VR, however, in part due to the presence of advanced camera, location and sensor technology already available today on higher-end smartphones. Airports are experimenting with beacon technology where an AR overlay would be able to easily and quickly guide you to your tight connection for an onward flight, or a tailored shopping or dining experience if you have a longer layover.

“Any sufficiently advanced technology is indistinguishable from magic,” goes Arthur C. Clarke’s famously quoted third law. But as we come to expect ever more authentic experiences, whether precise search results, an informed booking or an immersive travel adventure, we can count on increasingly magical technology from systems that learn to deliver us our “perfect bowl of porridge.”



Facebook shuts down AI system after it invents own language

In recent weeks, a story about experimental Facebook machine learning research has been circulating with increasingly panicky, Skynet-esque headlines.

“Facebook engineers panic, pull plug on AI after bots develop their own language,” one site wrote. “Facebook shuts down AI after it invents its own creepy language,” another added. “Did we humans just create Frankenstein?” asked yet another. One British tabloid quoted a robotics professor saying the incident showed “the dangers of deferring to artificial intelligence” and “could be lethal” if similar tech was injected into military robots.


References to the coming robot revolution, killer droids, malicious AIs and human extermination abounded, some more or less serious than others. Continually quoted was this passage, in which two Facebook chat bots had learned to talk to each other in what is admittedly a pretty creepy way.

Bob: I can i i everything else

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

The reality is somewhat more prosaic. A few weeks ago, FastCo Design did report on a Facebook effort to develop a “generative adversarial network” for the purpose of developing negotiation software.

The two bots quoted in the above passage were designed, as explained in a Facebook Artificial Intelligence Research unit blog post in June, for the purpose of showing it is “possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.”


The bots were never doing anything more nefarious than discussing with each other how to divide an array of given items (represented in the user interface as innocuous objects like books, hats, and balls) into a mutually agreeable split.


The intent was to develop a chatbot which could learn from human interaction to negotiate deals with an end user so fluently that said user would not realize they were talking with a robot, which FAIR said was a success:

“The performance of FAIR’s best negotiation agent, which makes use of reinforcement learning and dialog rollouts, matched that of human negotiators … demonstrating that FAIR’s bots not only can speak English but also think intelligently about what to say.”
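The underlying task can be pictured as a small search problem. The sketch below is purely illustrative, with made-up item valuations and a brute-force search rather than FAIR's end-to-end-trained neural networks: two agents with different private values for the same pool of items settle on the division that maximizes their combined value.

```python
from itertools import product

# Pool of items on the table, as in FAIR's setup.
pool = {"book": 2, "hat": 2, "ball": 1}

# Each agent privately values the items differently (hypothetical numbers).
alice_values = {"book": 1, "hat": 3, "ball": 1}
bob_values   = {"book": 4, "hat": 0, "ball": 2}

def best_split(pool, values_a, values_b):
    """Enumerate every way to divide the pool and return Alice's share
    in the split that maximizes the agents' combined value."""
    items = list(pool)
    best, best_score = None, -1
    # For each item type, Alice takes 0..count; Bob gets the rest.
    for take in product(*(range(pool[i] + 1) for i in items)):
        a_share = dict(zip(items, take))
        score = sum(values_a[i] * a_share[i] for i in items) + \
                sum(values_b[i] * (pool[i] - a_share[i]) for i in items)
        if score > best_score:
            best, best_score = a_share, score
    return best, best_score

split, score = best_split(pool, alice_values, bob_values)
```

With these valuations the books and the ball go to Bob and the hats to Alice; FAIR's contribution was getting agents to reach such outcomes through back-and-forth dialogue rather than by enumerating every option.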




Meet Orii, the AI-powered smart Bluetooth ring that will make you feel like a secret agent

Bluetooth earpieces are passé. Origami Labs, a Hong Kong-based startup, has developed a smart solution for receiving smartphone alerts. Called Orii, the solution is a cross between a ring and a Bluetooth earpiece (to be worn on the finger instead of the ear). The futuristic-looking smart accessory uses bone conduction to aid in communication using just the fingertips and offers voice control as a bonus.

Bone conduction is not a new technology; it has already been explored by the hearing aid market. The audio technology transmits sound directly to the inner ear, which also partially blocks out ambient noise. When applied to the Orii smart ring, users can simply place a fingertip to their ear and discreetly talk over a call or respond to alerts.


The Orii ring also comes with built-in support for voice controls. By saying “Orii”, followed by a command, a user can carry out a range of activities including translation, map directions, calendar access and setting alerts. In addition, the smart ring supports both Google Assistant and Siri, making it compatible with both Android and iOS devices. To enable either assistant, one simply long-presses the CapSense button on the smart ring to wake it up.

Given the personal and discreet nature of the smart device, having every notification buzz on your finger could be quite distracting. To tackle this, the companion Orii app allows a user to filter which types of notifications are sent to the ring, and the LED indicators on top will show the customized color accordingly.



Warehouse Picking Bots Could Soon Replace Expensive Humans

As the consumer demand for e-commerce grows, retailers are counting on robotics to lend a hand. Robotics companies and researchers are working on developing picking robots to select individual items and put them in boxes. It may sound like a small task, but at the scale of online retailer Amazon, the development looks set to revolutionize one of the most labor-intensive aspects of e-commerce. Up to now, warehouse “picking” — the act of grabbing an item from a warehouse shelf — was largely done by humans. But that is changing, reports The Wall Street Journal.


Per the WSJ:

Picking is the biggest labor cost in most e-commerce distribution centers, and among the least automated. Swapping in robots could cut the labor cost of fulfilling online orders by a fifth, said Marc Wulfraat, president of consulting firm MWPVL International Inc.

“When you’re talking about hundreds of millions of units, those numbers can be very significant,” he said. “It’s going to be a significant edge for whoever gets there first.”

So while robots have been able to grab items for a while, they have proven impractical in warehouses that stock an ever-changing rotation of products. However, more sophisticated robotic arms are being developed that can recognize different items and respond with tactile differentiation, all while amassing data on their experiences to inform future picking practices. Essentially, these picking robots can learn from their collective memory.


For retailers, this could be a game-changer. For some it means the possibility of fully automated warehouses, a “lights-out” scenario: a facility wouldn’t need overhead lamps because the robots wouldn’t need them. For others, it’s the ability to refine the jobs of human employees and solve a labor shortage that retailers have been dealing with as e-commerce has continued to expand. According to the U.S. Census Bureau, e-commerce revenues reached $390 billion in 2017 — twice as much as in 2011.

Hudson’s Bay — a Canadian retail giant that also owns Saks Fifth Avenue — is currently testing a robot made by RightHand Robotics in its Ontario distribution center. The robot has an arm that can pick up different items and put them in various boxes. The company’s latest iteration of its gripper is pretty incredible; it has impressive tactile functions, various robotic finger options and even fingernails for grasping slim objects.

Amazon is eager to advance this technology through what is arguably one of the biggest new robotics competitions. The Amazon Robotics Challenge will be held from July 27 to 30 in Nagoya, Japan, and invites the academic robotic community to share research and pit their picking robots against each other in performance competitions.

The WSJ reports that over the last five years, U.S. warehouses have added 262,000 jobs to a sector that now employs 950,000 people. Whether these robots will have an effect on the jobs of laborers is yet to be seen.



Apple launches machine learning research site

Apple just launched a blog focused on machine learning research papers and sharing the company’s findings. The Apple Machine Learning Journal is a bit empty right now, as the company has only shared one post, about turning synthetic images into realistic ones in order to train neural networks.

This move is interesting, as Apple doesn’t usually talk about its research projects. The company has contributed to and launched some important open source projects, such as WebKit, the browser engine behind Safari, and Swift, Apple’s latest programming language for iOS, macOS, watchOS and tvOS. But a blog with research papers on artificial intelligence projects is something new for Apple.

It’s interesting for a few reasons. First, this research paper has already been published on arXiv. Today’s version covers the same ground, but the language is a bit simpler. Apple has also added GIFs to illustrate the results.

According to this paper, Apple had to train its neural network to detect faces and other objects in photos. But instead of putting together huge libraries with hundreds of millions of sample photos to train this neural network, Apple created synthetic images of computer-generated characters and applied a filter to make those synthetic images look real. It was cheaper and faster to train the neural network this way.


Second, Apple tells readers to email the company in its inaugural post. There’s also a big link in the footer to look at job openings at Apple. It’s clear that Apple plans to use this platform to find promising engineers in that field.


Third, many people have criticized Apple when it comes to machine learning, saying that companies like Google and Amazon are more competent. And it’s true that the company has been quieter. Some consumer products like Google’s assistant and Amazon’s Alexa are also much better than Apple’s Siri.

But Apple has also been doing great work when it comes to analyzing your photo library on your device, the depth effect on the iPhone 7 Plus and the company’s work on augmented reality with ARkit. Apple wants to correct this narrative.



Google’s DeepMind Turns to Canada for Artificial Intelligence Boost

Google’s high-profile artificial intelligence unit has a new Canadian outpost.

DeepMind, which Google bought in 2014 for roughly $650 million, said Wednesday that it would open a research center in Edmonton, Canada. The new research center, which will work closely with the University of Alberta, is the United Kingdom-based DeepMind’s first international AI research lab.


DeepMind, now a subsidiary of Google parent company Alphabet (GOOG), recruited three University of Alberta professors to lead the new research lab. The professors—Rich Sutton, Michael Bowling, and Patrick Pilarski—will maintain their positions at the university while working at the new research office.

Sutton, in particular, is a noted expert in a subset of AI technologies called reinforcement learning and was an advisor to DeepMind in 2010. With reinforcement learning, computers look for the best possible way to achieve a particular goal, and learn from each time they fail.
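That trial-and-error loop can be illustrated with a toy example. The sketch below is purely illustrative, not DeepMind's implementation (which pairs reinforcement learning with deep neural networks rather than a lookup table): a tabular Q-learning agent in a five-state corridor learns, from repeated attempts, that walking right reaches the reward.

```python
import random

# A toy 5-state corridor: the agent starts at state 0 and is rewarded
# only for reaching state 4. Actions: 0 = left, 1 = right.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]
    random.seed(0)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore occasionally, otherwise act greedily.
            if random.random() < epsilon:
                action = random.randrange(2)
            else:
                action = max((0, 1), key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # Update the estimate toward reward plus discounted future value.
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
# After training, "right" should score higher than "left" in every state.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(GOAL)]
```

Early episodes are pure stumbling, but every failure still updates the value table, which is the "learn from each time they fail" part of the description above.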

DeepMind has popularized reinforcement learning in recent years through its AlphaGo program that has beat the world’s top players in the ancient Chinese board game, Go. Google has also incorporated some of the reinforcement learning techniques used by DeepMind in its data centers to discover the best calibrations that result in lower power consumption.

“DeepMind has taken this reinforcement learning approach right from the very beginning, and the University of Alberta is the world’s academic leader in reinforcement learning, so it’s very natural that we should work together,” Sutton said in a statement. “And as a bonus, we get to do it without moving.”

DeepMind has also been investigated by the United Kingdom’s Information Commissioner’s Office for failing to comply with the United Kingdom’s Data Protection Act as it expands into using its technology in the healthcare space.

ICO information commissioner Elizabeth Denham said in a statement on Monday that the office discovered a “number of shortcomings” in the way DeepMind handled patient data as part of a clinical trial to use its technology to alert on, detect, and diagnose kidney injuries. The ICO claims that DeepMind failed to explain to participants how it was using their medical data for the project.

DeepMind said Monday that it “underestimated the complexity” of the United Kingdom’s National Health Service “and of the rules around patient data, as well as the potential fears about a well-known tech company working in health.” DeepMind said it would now be more open with the public, patients, and regulators about how it uses patient data.

“We were almost exclusively focused on building tools that nurses and doctors wanted, and thought of our work as technology for clinicians rather than something that needed to be accountable to and shaped by patients, the public and the NHS as a whole,” DeepMind said in a statement. “We got that wrong, and we need to do better.”



Microsoft’s next big Windows update will use AI to fight malware

Windows Fall Creators Update will come with a hefty serving of security upgrades, made timely by the increasingly rampant cyberattacks targeting the platform these days. In a blog post, Microsoft has revealed how the upcoming major update will level up Windows Defender Advanced Threat Protection, a Win 10 enterprise service that flags early signs of infection. According to CNET, Windows enterprise director Rob Lefferts said the upgrade will use data from Redmond’s cloud-based services to create an AI anti-virus that will make ATP much better at preventing cyberattacks.

One of the AI’s features is the ability to instantly pick up the presence of a previously unknown malware on a computer. Microsoft can then quickly quarantine the malware in the cloud and create a signature for its identity that can be used to protect other computers from it. Lefferts says about 96 percent of cyberattacks use new malware, so this feature sounds especially helpful. It could certainly change the way Microsoft rolls out defense measures, since it currently takes researchers hours to conjure one up. By the time they’re done, the malware might have already made its way to more computers.
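The share-a-signature mechanic can be pictured very simply. This is a hypothetical sketch, not Microsoft's actual pipeline (real ATP signatures and detection models are far richer than a file hash): one machine flags a new sample, its digest is published to a shared "cloud" set, and every other machine can then block the identical file instantly.

```python
import hashlib

# Shared "cloud" blocklist of known-malware signatures (illustrative only).
cloud_signatures = set()

def flag_as_malware(sample: bytes) -> str:
    """Called when one machine's detector flags a new sample:
    publish its digest so every other machine is protected."""
    digest = hashlib.sha256(sample).hexdigest()
    cloud_signatures.add(digest)
    return digest

def is_known_malware(sample: bytes) -> bool:
    # Any machine can check an incoming file against the shared set.
    return hashlib.sha256(sample).hexdigest() in cloud_signatures

payload = b"MZ made-up sample bytes"   # hypothetical malware sample
flag_as_malware(payload)               # one machine reports it...
blocked = is_known_malware(payload)    # ...and all machines now recognize it
```

The speedup Lefferts describes comes from automating the first step: instead of researchers spending hours crafting a signature, the cloud model flags the sample and publishes protection immediately.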

While ATP’s new security features will initially only be available to enterprise customers, CNET says Microsoft has plans to roll them out to ordinary users. In addition, the company wants ATP to support “more platforms beyond Windows” and has begun working to make that happen. Microsoft will release Fall Creators’ preview between September and October, so these features (and more) will start hitting some businesses’ and companies’ PCs around that time.



Google’s AI Vision May No Longer Include Giant Robots

Good news for the deeply paranoid among us: If the apocalypse arrives via giant anthropomorphic robots, they probably won’t be bankrolled by Google. On Thursday, Google’s parent company, Alphabet, announced that it was selling Boston Dynamics, its premier robotics division, to the Japanese telco giant SoftBank for an undisclosed sum. The deal also includes a smaller robotics company called Schaft.

Boston Dynamics was less a moonshot than a sci-fi horror brought to life. Even before being acquired by Google in 2013, the 25-year-old company had already developed a Beast Wars–style squadron of robot predators with names like BigDog and WildCat, as well as a humanoid model called Atlas. The machines were often developed for the Pentagon under contracts with agencies such as the Defense Advanced Research Projects Agency. Google and the government both said the robots were being tested for disaster-relief scenarios, but that never stopped the stream of headlines describing them as “scary,” “nightmare-inducing,” or “evil.”

Whether Google’s ultimate plans were benign or nefarious, they never properly got off the ground. Both Boston Dynamics and Schaft were part of a months-long spending spree Google bankrolled to appease Andy Rubin, the creator of Android, who was looking to robots as his next frontier for innovation. But Rubin left Google in 2014, creating a leadership vacuum as the company struggled to get its various robotics acquisitions headquartered around the world to work in tandem. Under Rubin, Google reportedly had plans to launch a consumer robotics product by 2020, but that timeline seems in doubt now. (Alphabet still owns several smaller robotics startups that specialize in areas such as industrial manufacturing and film production.)

In the years since the Boston Dynamics acquisition, Google has shown that it doesn’t need to build a robot butler (or soldier) to create a future dominated by artificial intelligence. Machine-learning algorithms now guide most of the company’s products, whether recommending YouTube videos, identifying objects in users’ photo libraries, or whisking people around in driverless cars. The company is partnering with appliance manufacturers like General Electric so that people can control their ovens via voice commands to Google Home. And most ambitiously, at this year’s Google I/O, the company unveiled a suite of new products related to its machine-learning framework, TensorFlow. Developers will soon be able to make use of the same AI engines that power Google’s products to improve their own offerings via the company’s cloud-computing platform.

In the company’s ideal future, every human-machine interaction will be powered by Google, even if a specific app or appliance doesn’t have Google’s name on it. Terminator-style robots (OK, hopefully Jetsons-style) may one day be part of that vision, but the company can easily build an AI army with the products that fill our homes and garages today.



Musk predicts AI will be better than humans at everything in 2030

In response to an article by New Scientist predicting that artificial intelligence will be able to beat humans at everything and anything by 2060, Elon Musk replied that he believed the milestone would be much sooner – around 2030 to 2040.

The New Scientist story was based on a survey of more than 350 AI researchers who believe there is a 50% chance that AI will outperform humans in all tasks within 45 years.

At a high level, the data is not shocking, but more of an interesting tidbit from the future. Dive into the details of when those very same AI experts believe machines will be better at specific tasks than humans and things get a little creepy. Experts believe they will be better at translating languages than humans by 2024 – something that is already being done on-the-fly by Google for webpages and for spoken word via Google Translate.

High school students everywhere will be outclassed by AI that is estimated to outperform them in essay writing by 2026. AI moves in to take over truck driving by 2027, though we believe this will happen much sooner based on the progress Tesla is making with autonomous driving. Tesla has a fully autonomous cross-country trip planned for later this year that, if successful, will pave the way for autonomous vehicle technology to go mainstream.


The estimates get stranger, with AI predicted to be able to write a bestselling book better than humans by 2049 and to perform extremely complex, dynamic surgery by 2053. All human jobs are expected to be automated within 120 years, which is admittedly quite a bit farther out than 2060, but that is representative of the long tail of increasingly smaller tasks.

Elon is not all rainbows and sunshine with AI which is why he created the non-profit OpenAI organization. He co-founded the organization specifically to map out a path forward for AI research and development, and to ensure that AI is created in an intentional and safe manner.

OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.

While AI is automating the individual tasks or groups of tasks that comprise each industry, from trucking to making tacos at your local taqueria, OpenAI is looking beyond that to the first artificial general intelligence. This is an intelligence that will have the ability to adapt dynamically to a situation, learn new tasks, creatively apply itself to new conditions and perform much like a human would. OpenAI believes that a dynamic AGI will far surpass the AI implemented in any specific industry and will be a game-changer, packing the power to change the world in ways we never imagined.

With that goal in mind, OpenAI is pushing the envelope in an attempt to define the cutting edge of AI and to thereby earn the right to define the future of AI for the world. As famed computer scientist Alan Kay once said, “The best way to predict the future is to invent it.”

Elon surely has his finger on the pulse of AI and believes that it is highly likely that it will have a massive impact on humanity. OpenAI carries this belief forward, stating that,

Artificial general intelligence (AGI) will be the most significant technology ever created by humans.

Though Elon is confident AI is moving forward at a far faster pace than scientists believe and is actively working to shape its future, he still fears the technology.


Google wants AI to manage my relationships, and that might be a good thing

When Google said that not sharing photographs of your friends made you “kind of a terrible person” at this year’s I/O keynote, I bristled. The idea that its new Google Photos app would automatically suggest I share pictures with specific people sounded dystopian, especially because so much of the keynote seemed geared toward getting Google’s AI systems to help maintain relationships. Want to answer an email without even thinking about it? Inbox’s suggested responses are rolling out all over Gmail. Has a special moment with somebody slipped your mind? Google might organize photos from it into a book and suggest you have it printed.


Google is far from the first company to do this; Facebook suggests pictures to share and reminds you of friends’ birthdays all the time, for example. It’s easy to describe these features as creepy false intimacy, or say that they’re making us socially lazy, relieving us of the burden of paying attention to people. But the more I’ve thought about it, the more I’ve decided that I’m all right with an AI helping manage my connections with other people — because otherwise, a lot of those connections wouldn’t exist at all.

I don’t know if I’m a terrible person per se, but I may be the world’s worst relative. I have an extended network of aunts, uncles, cousins, and family friends that I would probably like but don’t know very well, and almost never see face-to-face. They’re the kind of relationships that some people I know maintain with family newsletters, emailed photos, and holiday cards. But I have never figured out how to handle any of these things.




Facebook’s new research tool is designed to create a truly conversational AI

Most of us talk to our computers on a semi-regular basis, but that doesn’t mean the conversation is any good. We ask Siri what the weather is like, or tell Alexa to put some music on, but we don’t expect sparkling repartee — voice interfaces right now are as sterile as the visual interface they’re supposed to replace. Facebook, though, is determined to change this: today it unveiled a new research tool that the company hopes will spur progress in the march to create truly conversational AI.

The tool is called ParlAI (pronounced like Captain Jack Sparrow asking to parley) and is described by the social media network as a “one-stop shop for dialog research.” It gives AI programmers a simple framework for training and testing chatbots, complete with access to datasets of sample dialogue, and a “seamless” pipeline to Amazon’s Mechanical Turk service. The latter is a crucial feature, as it means programmers can easily hire humans to interact with, test, and correct their chatbots.

Abigail See, a computer science PhD at Stanford University, welcomed the news, saying frameworks like this were “very valuable” to scientists. “There’s a huge volume of AI research being produced right now, with new techniques, datasets and results announced every month,” said See in an email to The Verge. “Platforms [like ParlAI] offer a unified framework for researchers to easily develop, compare and replicate their experiments.”

In a group interview, Antoine Bordes from Facebook’s AI research lab FAIR said that ParlAI was designed to create a missing link in the world of chatbots. “Right now there are two types of dialogue systems,” explains Bordes. The first, he says, are those that “actually serve some purpose” and execute an action for the user (e.g., Siri and Alexa); while the second serves no purpose, but is actually entertaining to talk to (like Microsoft’s Tay — although, yes, that one didn’t turn out great).


“What we’re after with ParlAI, is more about having a machine where you can have multi-turn dialogue; where you can build up a dialogue and exchange ideas,” says Bordes. “ParlAI is trying to develop the capacity for chatbots to enter long-term conversation.” This, he says, will require memory on the bot’s part, as well as a good deal of external knowledge (provided via access to datasets like Wikipedia), and perhaps even an idea of how the user is feeling. “In that respect, the field is very preliminary and there is still a lot of work to do,” says Bordes.

It’s important to note that ParlAI isn’t a tool for just anyone. Unlike, say, Microsoft’s chatbot frameworks, this is a piece of kit that’s aimed at the cutting-edge AI research community, rather than developers trying to create a simple chatbot for their website. It’s not so much about building actual bots, but finding the best ways to train them in the first place. There’s no doubt, though, that this work will eventually filter through to Facebook’s own products (like its part-human-powered virtual assistant M) and to its chatbot platform for Messenger.




Google’s AI Invents Sounds Humans Have Never Heard Before

Jesse Engel is playing an instrument that’s somewhere between a clavichord and a Hammond organ—18th-century classical crossed with 20th-century rhythm and blues. Then he drags a marker across his laptop screen. Suddenly, the instrument is somewhere else between a clavichord and a Hammond. Before, it was, say, 15 percent clavichord. Now it’s closer to 75 percent. Then he drags the marker back and forth as quickly as he can, careening through all the sounds between these two very different instruments.

“This is not like playing the two at the same time,” says one of Engel’s colleagues, Cinjon Resnick, from across the room. And that’s worth saying. The machine and its software aren’t layering the sounds of a clavichord atop those of a Hammond. They’re producing entirely new sounds using the mathematical characteristics of the notes that emerge from the two. And they can do this with about a thousand different instruments—from violins to balafons—creating countless new sounds from those we already have, thanks to artificial intelligence.

Engel and Resnick are part of Google Magenta—a small team of AI researchers inside the internet giant building computer systems that can make their own art—and this is their latest project. It’s called NSynth, and the team will publicly demonstrate the technology later this week at Moogfest, the annual art, music, and technology festival, held this year in Durham, North Carolina.

The idea is that NSynth, which Google first discussed in a blog post last month, will provide musicians with an entirely new range of tools for making music. Critic Marc Weidenbaum points out that the approach isn’t very far removed from what orchestral conductors have done for ages—“the blending of instruments is nothing new,” he says—but he also believes that Google’s technology could push this age-old practice into new places. “Artistically, it could yield some cool stuff, and because it’s Google, people will follow their lead,” he says.

The Boundaries of Sound

Magenta is part of Google Brain, the company’s central AI lab, where a small army of researchers are exploring the limits of neural networks and other forms of machine learning. Neural networks are complex mathematical systems that can learn tasks by analyzing large amounts of data, and in recent years they’ve proven to be an enormously effective way of recognizing objects and faces in photos, identifying commands spoken into smartphones, and translating from one language to another, among other tasks. Now the Magenta team is turning this idea on its head, using neural networks as a way of teaching machines to make new kinds of music and other art.
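At its core, that learning process is just repeated weight adjustment. The following is a deliberately tiny, self-contained illustration, nothing like Magenta's large-scale models: a two-layer network nudges its weights by gradient descent until its error on a toy dataset (XOR, a task famously impossible for a single layer) shrinks.

```python
import math
import random

random.seed(1)

# Toy training data: XOR inputs and targets.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
Y = [0, 1, 1, 0]
H = 8  # hidden units

# Small random initial weights for a 2-H-1 network.
w1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def sig(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Two layers of weighted sums, each squashed through a sigmoid.
    h = [sig(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w1, b1)]
    o = sig(sum(wj * hj for wj, hj in zip(w2, h)) + b2)
    return h, o

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in zip(X, Y)) / len(X)

loss_before = mse()
lr = 2.0
for _ in range(10000):
    for x, t in zip(X, Y):
        h, o = forward(x)
        d_o = (o - t) * o * (1 - o)                # output-layer error signal
        for j in range(H):
            d_h = d_o * w2[j] * h[j] * (1 - h[j])  # error backpropagated to hidden unit j
            w2[j] -= lr * d_o * h[j]
            w1[j][0] -= lr * d_h * x[0]
            w1[j][1] -= lr * d_h * x[1]
            b1[j] -= lr * d_h
        b2 -= lr * d_o
loss_after = mse()
```

The same analyze-data-adjust-weights loop, scaled up enormously, is what lets networks recognize faces or, in NSynth's case, model the characteristics of musical notes.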




Google’s New AI Tool Turns Your Selfies Into Emoji

Machine learning and artificial intelligence have, for a couple of years, been hailed as the death knell for almost everything you can imagine: the information we consume, the way we vote, the jobs we have, and even our very existence as a species. (Food for thought: The stuff about ML taking over Homo sapiens totally makes sense, even if you haven’t just taken a huge bong rip.) So maybe it’s welcome news that the newest application of ML from Google, worldwide leaders in machine learning, isn’t to build a new Mars rover or a chatbot that can replace your doctor. Rather, it’s a tool that anyone can use to generate custom emoji stickers of themselves.


It lives inside of Allo, Google’s ML-driven chat app. Starting today, when you pull up the list of stickers you can use to respond to someone, there’s a simple little option: “Turn a selfie into stickers.” Tap, and it prompts you to take a selfie. Then, Google’s image-recognition algorithms analyze your face, mapping each of your features to those in a kit illustrated by Lamar Abrams, a storyboard artist, writer, and designer for the critically acclaimed Cartoon Network series Steven Universe. There are, of course, literally hundreds of eyes and noses and face shapes and hairstyles and glasses available. All told, Google thinks there are 563 quadrillion faces that the tool could generate. Once that initial caricature is generated, you can then make tweaks: Maybe change your hair, or give yourself different glasses. Then, the machine automatically generates 22 custom stickers of you.
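The quadrillions come from plain multiplication: independent choices per feature compound. A toy calculation, with per-feature counts invented for illustration (Google has not published the real breakdown behind its 563-quadrillion figure):

```python
from math import prod

# Hypothetical option counts per facial feature (illustrative only).
feature_options = {
    "face shape": 12,
    "eyes": 40,
    "noses": 30,
    "hairstyles": 60,
    "glasses": 25,
    "mouths": 35,
}

# Independent choices multiply, so even modest menus explode combinatorially.
total_faces = prod(feature_options.values())
print(f"{total_faces:,} possible combinations")
```

Even these small made-up menus yield three-quarters of a billion faces; add a few more features and a few more options each, and quadrillions follow.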

The tool originated with an internal research project to see if ML could be used to generate an instant cartoon of someone, using just a selfie. But as Jason Cornwell, who leads UX for Google’s communication projects, points out, making a cartoon of someone isn’t much of an end goal. “How do you make something that doesn’t just convey what you look like but how you want to project yourself?” asks Cornwell. “That’s an interesting problem. It gets to ML and computer vision but also human expression. That’s where Jennifer came in. To provide art direction about how you might convey yourself.”

Cornwell is referring to Jennifer Daniel, the vibrant, well-known art director who first made her name for the zany, hyper-detailed infographics she created for Bloomberg Businessweek in the Richard Turley era, and then did a stint doing visual op-eds for the New York Times. As Daniel points out, “Illustrations let you bring emotional states in a way that selfies can’t.” Selfies are, by definition, idealizations of yourself. Emoji, by contrast, are distillations and exaggerations of how you feel. To that end, the emoji themselves are often hilarious: You can pick one of yourself as a slice of pizza, or a drooling zombie. “The goal isn’t accuracy,” explains Cornwell. “It’s to let someone create something that feels like themselves, to themselves.” So the user testing involved asking people to generate their own emoji and then asking questions such as: “Do you see yourself in this image? Would your friends recognize you?”




Facebook created a faster, more accurate translation system using artificial intelligence

Facebook’s billion-plus users speak a plethora of languages, and right now, the social network supports translation of over 45 different tongues. That means that if you’re an English speaker confronted with German, or a French speaker seeing Spanish, you’ll see a link that says “See Translation.”

But Tuesday, Facebook announced that its machine learning experts have created a neural network that translates language up to nine times faster and more accurately than other current systems that use a standard method to translate text.

The scientists who developed the new system work at the social network’s FAIR group, which stands for Facebook A.I. Research.

“Neural networks are modeled after the human brain,” says Michael Auli, of FAIR, and a researcher behind the new system. One of the problems that a neural network can help solve is translating a sentence from one language to another, like French into English. This network could also be used to do tasks like summarize text, according to a blog item posted on Facebook about the research.



But there are multiple types of neural networks. The standard approach so far has been to use recurrent neural networks to translate text, which look at one word at a time and then predict what the output word in the new language should be. It learns the sentence as it reads it. But the Facebook researchers tapped a different technique, called a convolutional neural network, or CNN, which looks at words in groups instead of one at a time.

“It doesn’t go left to right,” Auli says, of their translator. “[It can] look at the data all at the same time.” For example, a convolutional neural network translator can look at the first five words of a sentence, while at the same time considering the second through sixth words, meaning the system works in parallel with itself.
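A rough structural sketch of the difference (toy code, not a real translator; it only shows why windowed computation can run in parallel while recurrence cannot):

```python
def conv_windows(tokens, k=5):
    """Convolution-style reading: every k-word window is formed
    independently of the others, so all windows can be computed in
    parallel (the first five words alongside words two through six)."""
    return [tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)]

def rnn_states(tokens):
    """Recurrent-style reading: each state folds in the previous one,
    so the words must be processed strictly left to right."""
    state, states = (), []
    for tok in tokens:
        state = state + (tok,)  # depends on everything before it
        states.append(state)
    return states

sentence = "the cat sat on the mat today".split()
windows = conv_windows(sentence)  # three overlapping 5-word windows
```

In a real CNN translator each window is fed through learned filters on a GPU simultaneously; here the point is only that no window needs to wait for another, while each recurrent state must.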

Graham Neubig, an assistant professor at Carnegie Mellon University’s Language Technologies Institute, researches natural language processing and machine translation. He says that this isn’t the first time this kind of neural network has been used to translate text, but that this seems to be the best he’s ever seen it executed with a convolutional neural network.

“What this Facebook paper has basically showed—it’s revisiting convolutional neural networks, but this time they’ve actually made it really work very well,” he says.

Facebook isn’t yet saying how it plans to integrate the new technology with its consumer-facing product; that’s more the purview of a department there called the applied machine learning group. But in the meantime, the researchers have released the technology as open source, so other coders can benefit from it.

That’s a point that pleases Neubig. “If it’s fast and accurate,” he says, “it’ll be a great additional contribution to the field.”



This startup’s ‘software robots’ are taking the jobs of low-skilled office workers

The $30m raised last week by UiPath, which builds apps to automate repetitive office work, is the largest investment a Romanian startup has ever received.

Its tools are used by leading companies working in financial services, insurance, and healthcare, and each software robot license can replace up to five low-skilled full-time human employees, UiPath says.

The firm’s software robots mimic human users. Once installed on a computer and trained to perform certain tasks, they can read screens the way a human does and can perform a broad range of tasks, such as saving email attachments from clients, extracting data from a particular field in a bill, and importing that data into a company’s software, where it can be manipulated by a human employee.

A software robot could be trained to install Office copies on Windows machines, for example. It knows where and when to click next, and to check certain buttons. Of course, it still needs to wait for files to copy during certain steps of the installation process.
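For a sense of what one such task looks like in plain code, here is a hedged sketch of attachment extraction using only Python's standard library. UiPath's robots are trained visually rather than programmed like this, and the message below is fabricated for illustration:

```python
from email import message_from_bytes
from email.message import EmailMessage

def extract_attachments(raw_bytes):
    """Return (filename, payload) pairs from a raw RFC 822 message,
    the sort of step a software robot performs before filing data away."""
    msg = message_from_bytes(raw_bytes)
    found = []
    for part in msg.walk():
        fname = part.get_filename()
        if fname:  # only parts that carry an attached file
            found.append((fname, part.get_payload(decode=True)))
    return found

# Fabricate a sample client email to run the "robot" against.
msg = EmailMessage()
msg["Subject"] = "Invoice"
msg.set_content("Bill attached.")
msg.add_attachment(b"total: 42.00", maintype="application",
                   subtype="octet-stream", filename="bill.txt")

attachments = extract_attachments(msg.as_bytes())
```

The extracted payload would then be parsed for fields like totals and dates and pushed into the company's own software, which is where a human takes over.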

One of the unusual approaches that UiPath has adopted is that it offers its software free to companies with a turnover below $1m.

UiPath was founded in Romania in 2012, by former Microsoft software developer Daniel Dines, now CEO, and Marius Tirca, CTO.

It has grown from 10 employees two years ago to 150 today. About 100 of them are still located in Bucharest, Romania, where the tech team is based. The company has physical offices in New York, London, Bangalore, Tokyo, and Singapore, and plans to set up shop in Hong Kong and Sydney.

UiPath’s turnover is undisclosed, but management says it increased sixfold in 2016, and most of its customers are in the US and Europe. CEO Dines said he’s working with two of the top 10 companies in the Fortune Global 500, among others.


A competitor to Automation Anywhere and Blue Prism, UiPath says it will use the money raised in the series A round led by venture capital firm Accel Partners to expand the business and develop its technologies.

CTO Tirca said his tech team is working on adding more cognitive capabilities to the software, such as natural language processing and machine learning. Work is also going on to improve the way the robots handle unstructured data.

UiPath plans to double the team by the end of this year, tapping into Romania’s vibrant tech talent pool. The salaries it offers are among the highest in the country, but its technical job interviews are among the most difficult. The management wants to recruit the best and brightest, regardless of their experience in the field.

The robotic process automation market is expected to approach $9bn by 2024, according to Grand View Research. It reckons small and mid-size companies will benefit most from automation, as software robots are 65 percent less expensive than full-time employees. Forrester estimates that, by 2021, there will be over four million robots doing office, administrative, sales, and related tasks.




The inventor of Siri says one day AI will be used to upload and access our memories

Artificial intelligence may one day surpass human intelligence. But, if designed right, it may also be used to enhance human cognition.

Tom Gruber, one of the inventors of the artificial intelligence voice interface Siri that now lives inside iPhones and the macOS operating system, shared a new idea at the TED 2017 conference today for using artificial intelligence to augment human memory.

“What if you could have a memory that was as good as computer memory and is about your life?” Gruber asked the audience. “What if you could remember every person you ever met? How to pronounce their name? Their family details? Their favorite sports? The last conversation you had with them?”

Gruber said he thinks that using artificial intelligence to catalog our experiences and to enhance our memory isn’t just a wild idea — it’s inevitable.


And the whole reason Gruber says it’s possible: Data about the media that we consume and the people we talk to is available because we use the internet and our smartphones to mediate our lives.


Privacy is no small consideration here. “We get to choose what is and is not recalled and retained,” said Gruber. “It’s absolutely essential that this be kept very secure.”

Though the idea of digitally storing our memories certainly raises a host of unsettling possibilities, Gruber says that AI memory enhancement could be a life-changing technology for those who suffer from Alzheimer’s or dementia.



Gruber isn’t the only one in Silicon Valley thinking of ways to get inside your head. Last week at the annual Facebook developer conference, Mark Zuckerberg shared a project Facebook is working on to build non-invasive sensors that will read brain activity. The sensors are being designed to read the part of your brain that translates thoughts to speech to allow you to type what you’re thinking.

And Elon Musk, CEO of Tesla and SpaceX, has started a new company called Neuralink to build wireless brain-computer interface technology. Musk shared his idea for the technology, which he calls “neural lace,” at Recode’s Code Conference last year.

Watch Musk discuss neural lace and why he thinks it could help humans keep pace with rapid advancements in artificial intelligence.



The smartphone is eventually going to die — this is Mark Zuckerberg’s crazy vision for what comes next

At this week’s Facebook F8 conference in San Jose, Mark Zuckerberg doubled down on his crazy ambitious 10-year plan for the company, first revealed in April 2016.

Basically, Zuckerberg uses this roadmap to demonstrate Facebook’s three-stage game plan in action: First, you take the time to develop a neat cutting-edge technology. Then you build a product based on it. Then you turn it into an ecosystem where developers and outside companies can use that technology to build their own businesses.

When Zuckerberg first announced this plan last year, it was big on vision, but short on specifics.

On Facebook’s planet of 2026, the entire world has internet access — with many people likely getting it through Internet.org, Facebook’s connectivity arm. Zuckerberg reiterated this week that the company is working on smart glasses that look like your normal everyday Warby Parkers. And underpinning all of this, Facebook is promising artificial intelligence good enough that we can talk to computers as easily as chatting with humans.


A world without screens

For science-fiction lovers, the world Facebook is starting to build is very cool and insanely ambitious. Instead of smartphones, tablets, TVs, or anything else with a screen, all our computing is projected straight into our eyes as we type with our brains.

A mixed-reality world is exciting for society and for Facebook shareholders. But it also opens the door to some crazy future scenarios, where Facebook, or some other tech company, intermediates everything you see, hear, and maybe even think. And as we ponder the implications of that kind of future, consider how fast we’ve already progressed on Zuckerberg’s timeline.

We’re now one year closer to Facebook’s vision for 2026. And things are slowly, but surely, starting to come together, as the social network’s plans for virtual and augmented reality, universal internet connectivity, and artificial intelligence start to slowly move from fantasy into reality.

In fact, Michael Abrash, the chief scientist of Facebook-owned Oculus Research, said this week that we could be just 5 years away from a point where augmented reality glasses become good enough to go mainstream. And Facebook is now developing technology that lets you “type” with your brain, meaning you’d type, point, and click by literally thinking at your smart glasses. Facebook is giving us a glimpse of this with the Camera Effects platform, making your phone into an AR device.

Fries with that?

The potential here is tremendous. Remember that Facebook’s mission is all about sharing, and this kind of virtual, ubiquitous “teleportation” and interaction is an immensely powerful means to that end.

This week, Oculus unveiled “Facebook Spaces,” a “social VR” app that lets denizens of virtual reality hang out with each other, even if some people are in the real world and some people have a headset strapped on. It’s slightly creepy, but it’s a sign of the way that Facebook sees you and your friends spending time together in the future. 

And if you’re wearing those glasses, there’s no guarantee that the person who’s taking your McDonald’s order is a human, after all. Imagine a virtual avatar sitting at the cash register, projected straight into your eyeballs, and taking your order. With Facebook announcing its plans to revamp its Messenger platform with AI features that also make it more business-friendly, the virtual fast-food cashier is not such a far-fetched scenario.

Sure, Facebook Messenger chatbots have struggled to gain widespread acceptance since they were introduced a year ago. But as demonstrated with Microsoft’s Xiaoice and even the Tay disaster, we’re inching towards more human-like systems that you can just talk to. And if Facebook’s crazy plan to let you “hear” with your skin plays out, they can talk to you while you’re wearing those glasses. And again, you’ll be able to reply with just a thought.




Supercharge healthcare with artificial intelligence

Pattern-recognition algorithms can transform horses into zebras; winter scenes can become summer; artificial intelligence algorithms can generate art; robot radiologists can analyze your X-rays with remarkable precision.

We have reached the point where pattern-recognition algorithms and artificial intelligence (A.I.) are more accurate than humans at the visual diagnosis and observation of X-rays, stained breast cancer slides and other medical signs involving general correlations between normal and abnormal health patterns.

Before we run off and fire all the doctors, let’s better understand the A.I. landscape and the technology’s broad capabilities. A.I. won’t replace doctors — it will help to empower them and extend their reach, improving patient outcomes.

An evolution of machine learning

The challenge with artificial intelligence is that no single and agreed-upon definition exists. Nils Nilsson defined A.I. as “activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.” But that definition isn’t close to describing how A.I. evolved.

Artificial intelligence began with the Turing Test, proposed in 1950 by Alan Turing, the scientist, cryptanalyst and theoretical biologist. In the nearly seven decades since, rapid progress has steadily advanced A.I. capabilities.

Isaac Asimov proposed the Three Laws of Robotics in 1950. The first A.I. program was coded in 1951. In 1959, MIT began research in the field of artificial intelligence. GM introduced the first robot into its production assembly line in 1961. The 1960s were transformative: the first machine learning program was written, an A.I. program first demonstrated an understanding of natural language, and the first chatbot emerged. In the 1970s, the first autonomous vehicle was designed at the Stanford A.I. lab. Healthcare applications for A.I. were first introduced in 1974, along with an expert system for medical diagnostics. Lisp machines emerged in the 1980s, and neural networks began integrating with autonomous vehicles. IBM’s famous Deep Blue beat Garry Kasparov at chess in 1997. And by 1999, the world was experimenting with A.I.-based “domesticated” robots.

Innovation was further inspired in 2004 when DARPA hosted the first design competition for autonomous vehicles in the commercial sector. By 2005, big tech companies, including IBM, Microsoft, Google and Facebook, were actively investing in commercial applications, and the first recommendation engines surfaced. The highlight of 2009 was Google’s first self-driving car, some three decades after the first autonomous vehicle was tested at Stanford.

Narrative science, in which A.I. writes reports, was demonstrated in 2010, and IBM Watson was crowned a Jeopardy champion in 2011. Narrative science quickly evolved into personal assistants such as Siri, Google Now, and Cortana. In 2015, Elon Musk and others launched OpenAI to discover and enact the path to safe artificial general intelligence, a friendly A.I. In early 2016, Google’s DeepMind defeated legendary Go player Lee Se-dol in a historic victory.




How artificial intelligence learns to be racist

Open up the photo app on your phone and search “dog,” and all the pictures you have of dogs will come up. This was no easy feat. Your phone knows what a dog “looks” like.

This and other modern-day marvels are the result of machine learning. These are programs that comb through millions of pieces of data and start making correlations and predictions about the world. The appeal of these programs is immense: These machines can use cold, hard data to make decisions that are sometimes more accurate than a human’s.

But know: Machine learning has a dark side. “Many people think machines are not biased,” Princeton computer scientist Aylin Caliskan says. “But machines are trained on human data. And humans are biased.”

Computers learn how to be racist, sexist, and prejudiced in a similar way that a child does, Caliskan explains: from their creators.


We think artificial intelligence is impartial. Often, it’s not.

Nearly all new consumer technologies use machine learning in some way. Like Google Translate: No person instructed the software to learn how to translate Greek to French and then to English. It combed through countless reams of text and learned on its own. In other cases, machine learning programs make predictions about which résumés are likely to yield successful job candidates, or how a patient will respond to a particular drug.


A machine learning program sifts through billions of data points to solve problems (such as “can you identify the animal in the photo?”), but it doesn’t always make clear how it has solved them. And it’s increasingly clear these programs can develop biases and stereotypes without us noticing.

Last May, ProPublica published an investigation into a machine learning program that courts use to predict who is likely to commit another crime after being booked. The reporters found that the software systematically rated black people at a higher risk than whites.


“Scores like this — known as risk assessments — are increasingly common in courtrooms across the nation,” ProPublica explained. “They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts … to even more fundamental decisions about defendants’ freedom.”

The program learned about who is most likely to end up in jail from real-world incarceration data. And historically, the real-world criminal justice system has been unfair to black Americans.

This story reveals a deep irony about machine learning. The appeal of these systems is they can make impartial decisions, free of human bias. “If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long,” ProPublica wrote.

But what happened was that machine learning programs perpetuated our biases on a large scale. So instead of a judge being prejudiced against African Americans, it was a robot.

It’s stories like the ProPublica investigation that led Caliskan to research this problem. As a female computer scientist who was routinely the only woman in her graduate school classes, she’s sensitive to this subject.

Caliskan has seen bias creep into machine learning in often subtle ways — for instance, in Google Translate.

Turkish, one of her native languages, has no gender pronouns. But when she uses Google Translate on Turkish phrases, it “always ends up as ‘he’s a doctor’ in a gendered language.” The Turkish sentence didn’t say whether the doctor was male or female. The computer just assumed if you’re talking about a doctor, it’s a man.

How robots learn implicit bias

Recently, Caliskan and colleagues published a paper in Science finding that as a computer teaches itself English, it becomes prejudiced against black Americans and women.

Basically, they used a common machine learning program to crawl through the internet, look at 840 billion words, and teach itself the definitions of those words. The program accomplishes this by looking for how often certain words appear in the same sentence. Take the word “bottle.” The computer begins to understand what the word means by noticing it occurs more frequently alongside the word “container,” and also near words that connote liquids like “water” or “milk.”

This idea to teach robots English actually comes from cognitive science and its understanding of how children learn language. How frequently two words appear together is the first clue we get to deciphering their meaning.
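A toy version of that first clue, counting sentence-level co-occurrence over an invented four-sentence corpus:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(sentences):
    """Count how often two words share a sentence: the 'first clue'
    to meaning described above."""
    counts = Counter()
    for s in sentences:
        for pair in combinations(sorted(set(s.split())), 2):
            counts[pair] += 1
    return counts

# An invented four-sentence corpus.
corpus = [
    "the bottle is a container",
    "pour water from the bottle",
    "a bottle of milk",
    "the cat sat on the mat",
]
counts = cooccurrence(corpus)
# "bottle" keeps company with "container", "water", and "milk",
# and never with "cat"; meanings begin to separate from counts alone.
```

Real systems scale this idea to billions of words and compress the counts into dense vectors, but the statistical seed is the same.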

Once the computer amassed its vocabulary, Caliskan ran it through a version of the implicit association test.

In humans, the IAT is meant to uncover subtle biases in the brain by seeing how long it takes people to associate words. A person might quickly connect the words “male” and “engineer.” But if a person lags on associating “woman” and “engineer,” it’s a demonstration that those two terms are not closely associated in the mind, implying bias. (There are some reliability issues with the IAT in humans, which you can read about here.)

Here, instead of looking at lag time, Caliskan looked at how closely the computer thought two terms were related. She found that African-American names in the program were less associated with the word “pleasant” than white names. And female names were more associated with words relating to family than male names. (In a weird way, the IAT might be better suited for use on computer programs than on humans, because humans answer its questions inconsistently, while a computer will yield the same answer every single time.)
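In embedding terms, that measurement is just cosine similarity. A sketch with hand-made three-dimensional vectors standing in for learned embeddings (real embeddings have hundreds of dimensions, and every name and number here is hypothetical):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word, attributes):
    """Mean cosine similarity of a word vector to a set of attribute
    vectors: the embedding analogue of the IAT's reaction time."""
    return sum(cosine(word, a) for a in attributes) / len(attributes)

# Hand-made 3-d vectors standing in for learned embeddings.
vecs = {
    "pleasant": (1.0, 0.1, 0.0),
    "lovely":   (0.9, 0.2, 0.1),
    "name_a":   (0.8, 0.3, 0.1),  # hypothetical name embedding
    "name_b":   (0.1, 0.9, 0.4),  # hypothetical name embedding
}
attrs = [vecs["pleasant"], vecs["lovely"]]

# A positive gap means name_a sits closer to the "pleasant" words.
bias_gap = association(vecs["name_a"], attrs) - association(vecs["name_b"], attrs)
```

Run over real embeddings and real sets of names and attribute words, a persistent gap like this is exactly the bias signal Caliskan measured.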

Like a child, a computer builds its vocabulary through how often terms appear together. On the internet, African-American names are more likely to be surrounded by words that connote unpleasantness. That’s not because African Americans are unpleasant. It’s because people on the internet say awful things. And it leaves an impression on our young AI.

This is as much of a problem as you think.

The consequences of racist, sexist AI

Increasingly, Caliskan says, job recruiters are relying on machine learning programs to take a first pass at résumés. And if left unchecked, the programs can learn and act upon gender stereotypes in their decision-making.

“Let’s say a man is applying for a nurse position; he might be found less fit for that position if the machine is just making its own decisions,” she says. “And this might be the same for a woman applying for a software developer or programmer position. … Almost all of these programs are not open source, and we’re not able to see what’s exactly going on. So we have a big responsibility about trying to uncover if they are being unfair or biased.”

And that will be a challenge in the future. Already AI is making its way into the health care system, helping doctors find the right course of treatment for their patients. (There’s early research on whether it can help predict mental health crises.)

But health data, too, is filled with historical bias. It’s long been known that women get surgery at lower rates than men. (One reason is that women, as primary caregivers, have fewer people to take care of them post-surgery.)

Might AI then recommend surgery at a lower rate for women? It’s something to watch out for.

So are these programs useless?

Inevitably, machine learning programs are going to encounter historical patterns that reflect racial or gender bias. And it can be hard to draw the line between what is bias and what is just a fact about the world.

Machine learning programs will pick up on the fact that most nurses throughout history have been women. They’ll realize most computer programmers are male. “We’re not suggesting you should remove this information,” Caliskan says. Doing so might actually break the software completely.

Caliskan thinks there need to be more safeguards. Humans using these programs need to constantly ask, “Why am I getting these results?” and check the output of these programs for bias. They need to think hard about whether the data they are combing reflects historical prejudices. Caliskan admits the best practices for combating bias in AI are still being worked out. “It requires a long-term research agenda for computer scientists, ethicists, sociologists, and psychologists,” she says.



Would you trust your life to an ‘autopilot’ robo-doctor?

I am in an aeroplane crossing the Atlantic Ocean as I write this. We took off from Heathrow Airport more than three hours ago. By now, it’s likely the plane’s captain and crew are not physically in control of the aircraft. Something as complex as flying a metal tube packed with more than 300 living souls at 12,000 meters and 900kph is left to a computer and a set of algorithms. The autopilot.

Such a device is badly needed in our hospital wards. Critical patients needing 24/7 intensive care could certainly benefit from data-based approaches that leverage state-of-the-art analytics and AI.

For instance, a wise intensive care unit (ICU) nurse once told me: “Don’t get sick, but if you do get sick, don’t do it at night.” Data suggests evenings and weekends are not a good time to fall ill, because a patient’s risk of death rises then. If our healthcare professionals had their capabilities augmented like the pilot in charge of my plane (who is able to rest now, so he can be 100 per cent focused during the approach and landing), we could not only get sick anytime without an increased risk of dying, we would also improve patient outcomes and decrease overall costs for the healthcare system. Just consider the upcoming shortage of medical professionals in the NHS and in the US, and the fact that medical errors are already the third-highest cause of death in the US (after heart disease and cancer), with 251,000 deaths in 2013.

Many healthcare organisations are working on potential AI applications. Research groups such as the Stanford Vision Lab are devoting efforts to the general use of AI in healthcare, and startups such as Etiometry in Boston and Better Care in Barcelona are focusing on critical care hospital units. Etiometry’s goal is to develop a predictive analytics platform to improve the quality of care in the ICU. Better Care is focusing on a software platform to capture biomedical data around the ICU patient – incorporating medical knowledge and algorithms. This is also an area of focus for companies such as Google, IBM and Qualcomm.

In the ICU, data from a patient is extensive and complex. But AI deals well with complexity. Based on a patient’s data, an AI platform could ensure the most basic mission of the ICU team (“keep the patient alive”): provide descriptive analytics for “what is going on”, predictive analytics for “what’s going to happen” and prescriptive analytics for “what shall you do”.

The first layer with descriptive analytics would help them understand “what is going on” with a specific patient within the context of thousands of other patients with that same condition. Crunching all that data in real time is an example of a skill set that is not yet available to human beings. The second layer would allow them to allocate resources according to “what’s going to happen” and the progressing complications of patients who are fighting for their lives. Finally, as the presence of AI in the ICU becomes the norm, the availability (and quality) of data would allow for the use of prescriptive analytics as a complement to trial and error that is still predominant when managing critical patients.
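As a toy illustration of the three layers stacked on one vital sign (all thresholds and readings are invented; this is not clinical guidance):

```python
def descriptive(series, window=3):
    """What is going on: a smoothed view of the recent readings."""
    recent = series[-window:]
    return sum(recent) / len(recent)

def predictive(series):
    """What's going to happen: naive linear extrapolation of the trend."""
    return series[-1] + (series[-1] - series[-2])

def prescriptive(predicted, limit=130):
    """What shall you do: flag the patient if the forecast breaches a limit."""
    return "alert the ICU team" if predicted > limit else "continue monitoring"

heart_rate = [88, 92, 101, 114, 126]  # invented beats-per-minute readings
current = descriptive(heart_rate)     # what is going on
forecast = predictive(heart_rate)     # 126 + (126 - 114) = 138
action = prescriptive(forecast)       # what shall you do
```

A real ICU platform would replace the naive extrapolation with models trained on thousands of patient trajectories, but the descriptive-predictive-prescriptive layering is the same.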

Of course, hospitals are not ready for this yet. Just consider that, in 2016, most world-leading hospitals still had no internet access in their operating rooms. Furthermore, doctors have historically been reluctant and conservative when it comes to the introduction of new technologies. To some extent, technology companies have also made the mistake of suggesting that AI will replace doctors – and no trade group likes to feel threatened. We cannot expect healthcare professionals to be free from error. It has been their creative thinking and diligent care that has driven our healthcare systems to greater heights. It has also been their human problem-solving that has allowed us to develop novel medical technologies, contributing to increased life expectancy.

The first step should be in complementing our doctors, not replacing them. Just imagine an ICU room with three screens reporting 20 essential parameters in real time – both invasive and non-invasive monitoring – along with data coming in from the labs, imaging tests and the discrete measurements and clinical observations made by healthcare professionals. The potential in this scenario is not just to mimic the doctors, but to perform tasks that no doctor can manage. If we are able to develop systems that enhance their capabilities and allow them to provide their patients better care, we will be in a win-win situation for healthcare professionals, patients and taxpayers.



Elon Musk launches Neuralink, a venture to merge the human brain with AI

SpaceX and Tesla CEO Elon Musk is backing a brain-computer interface venture called Neuralink, according to The Wall Street Journal. The company, which is still in the earliest stages of existence and has no public presence whatsoever, is centered on creating devices that can be implanted in the human brain, with the eventual purpose of helping human beings merge with software and keep pace with advancements in artificial intelligence. These enhancements could improve memory or allow for more direct interfacing with computing devices.

Musk has hinted at the existence of Neuralink a few times over the last six months or so. More recently, Musk told a crowd in Dubai, “Over time I think we will probably see a closer merger of biological intelligence and digital intelligence.” He added that “it’s mostly about the bandwidth, the speed of the connection between your brain and the digital version of yourself, particularly output.” On Twitter, Musk has responded to inquiring fans about his progress on a so-called “neural lace,” which is sci-fi shorthand for a brain-computer interface humans could use to improve themselves.

These types of brain-computer interfaces exist today only in science fiction. In the medical realm, electrode arrays and other implants have been used to help ameliorate the effects of Parkinson’s, epilepsy, and other neurodegenerative diseases. However, very few people on the planet have complex implants placed inside their skulls, and the number of patients with very basic stimulating devices numbers only in the tens of thousands. This is partly because it is incredibly dangerous and invasive to operate on the human brain, and only those who have exhausted every other medical option choose to undergo such surgery as a last resort.

This has not stopped a surge in Silicon Valley interest from tech industry futurists who are interested in accelerating the advancement of these types of far-off ideas. Kernel, a startup created by Braintree co-founder Bryan Johnson, is also trying to enhance human cognition. With more than $100 million of Johnson’s own money — the entrepreneur sold Braintree to PayPal for around $800 million in 2013 — Kernel and its growing team of neuroscientists and software engineers are working toward reversing the effects of neurodegenerative diseases and, eventually, making our brains faster and smarter and more wired.

“We know if we put a chip in the brain and release electrical signals, that we can ameliorate symptoms of Parkinson’s,” Johnson told The Verge in an interview late last year. (Johnson also confirmed Musk’s involvement with Neuralink.) “This has been done for spinal cord pain, obesity, anorexia… what hasn’t been done is the reading and writing of neural code.” Johnson says Kernel’s goal is to “work with the brain the same way we work with other complex biological systems like biology and genetics.”

Kernel, to its credit, is quite upfront about the years of medical research necessary to better understand the human brain and pioneer new surgery techniques, software methods, and implant devices that could make a consumer brain-computer interface a reality. The Wall Street Journal says Neuralink was founded as a medical research company in California last July, which bolsters the idea that Musk will follow a similar route as Johnson and Kernel.

To be fair, the hurdles involved in developing these devices are immense. Neuroscience researchers say we have a very limited understanding of how the neurons in the human brain communicate, and our methods for collecting data on those neurons are rudimentary. Then there’s the idea of people volunteering to have electronics placed inside their heads.


“People are only going to be amenable to the idea [of an implant] if they have a very serious medical condition they might get help with,” Blake Richards, a neuroscientist and assistant professor at the University of Toronto, told The Verge in an interview earlier this year. “Most healthy individuals are uncomfortable with the idea of having a doctor crack open their skull.”

The Fashion House Of Artificial Intelligence

Fashion subscription service Stitch Fix decided to try it last year, and the human-measured results are in: computers are really good designers.

Stitch Fix’s computers identified shirt cuts, patterns, and sleeve styles popular among the company’s subscribers, and mashed them together with some human help to create three brand new shirts. All three sold out.

(I don’t blame them, either. I feel like I’m falling into a stereotype with this, or that I’m officially too predictable, but I’d buy the shirt they came up with. Just look how summery this one is! So classy, so elegant, so cheery, so easy. It’s like an updated-yet-classic-garden-party-that-isn’t-even-stuffy turned into a shirt.)

So Stitch Fix decided to keep it going and design nine more computer-human “hybrid” items this year – this collection including dresses – with a plan to create another couple dozen by the end of the year. That adds up to a grand total of 40-odd original designs, comparable to the output of famous, well-established couture fashion houses in a given season. The items make up less than 1% of the company’s stock, but so far so good.


Hybrid Creativity

The idea of artificial intelligence/human creativity hybrids isn’t original to Stitch Fix, but this is the first time it has been applied to fashion. Industries like music, graphic design, industrial design, videogames, and special effects have been using AI-human hybrid creativity for a while. Cars are on the “Eventually” list. But it’ll be a while before computer creativity is really autonomous. However, as tech becomes more and more sophisticated, and as we better learn how to teach and train it, it’s capable of more and more. The tech goal of having true AI creativity gets closer and closer every day.

Will this mean that jobs are taken away? Possibly a few. But, more than any other industry, creative ones are both the most adaptive and the largest. There is no limit to creativity, even when AI is introduced as a competitor. It’s one thing to automate a job like sewing, but you can’t automate the imagination. Human artists aren’t going anywhere. If anything, it’ll let human designers stay creative while the computers take care of the customer-demanded clothing items. Why design for the trend-followers when a computer can do that while you focus on setting the trends?

The Artificial Intelligence Fashion Future

Hopefully the next stop will be fashion pieces along the lines of a pipe dream I read about as a kid, published in National Geographic Kids about 10 years ago (I wasn’t kidding when I said I was a kid), that sounded so great I still haven’t forgotten it: someday, we could have fabric that mends itself, is truly stain-proof, and adjusts itself according to the temperature of the air around it to regulate body heat.

I’m all for robots contributing to New York Fashion Week, but I’m more than willing to put that off a few years if it means shortening the time between now and when I can have t-shirts that fix those annoying little holes they get down at the hem (you know what I mean), jeans that keep up with me as I fly from the Deep South to the Far North and back (because I’m insane), and socks that mend themselves when I snip the fabric as I cut off the sales tag (please tell me I’m not the only one).

Until then, I’ll be content with garden party shirts that don’t make me look like a doily, dresses that leave human designers time to exercise their creative power without necessarily pandering to the masses, and watching AI computers dramatically compete with each other for spots in New York Fashion Week. Let the games begin.



Artificial intelligence powers marketing

Marketing is undergoing dramatic change, driven by shifts in technology and the availability of digital data. Among the most significant changes is the heightened ability for marketers to discern what customers and potential buyers care about and then act on that information.

Marketers today are watching as buyers leave digital tracks – the web pages they view, buttons they press on mobile devices, comments they leave on Facebook or Twitter. By observing how consumers act, marketers can learn what buyers care about and what is important to them.

By aggregating this digital data, and applying the right algorithms, marketers can recommend products, deliver interesting offers, and create personalization to segments of one rather than to batches of thousands.
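The aggregation-and-recommendation loop described above can be sketched in a few lines. Everything here is hypothetical: the event format, categories, and offers are invented for illustration, not drawn from any real marketing platform.

```python
# Toy sketch: aggregate a user's digital "tracks" into a profile,
# then pick an offer for a segment of one.
from collections import Counter

def build_profile(events):
    """Count page-view categories to infer what a buyer cares about."""
    return Counter(e["category"] for e in events if e["type"] == "page_view")

def recommend_offer(profile, offers):
    """Match the user's top interest to an offer; fall back to a generic one."""
    if not profile:
        return offers["default"]
    top_interest, _ = profile.most_common(1)[0]
    return offers.get(top_interest, offers["default"])

events = [
    {"type": "page_view", "category": "running_shoes"},
    {"type": "page_view", "category": "running_shoes"},
    {"type": "page_view", "category": "yoga_mats"},
    {"type": "click", "category": "running_shoes"},
]
offers = {
    "running_shoes": "10% off trail runners",
    "yoga_mats": "free yoga class pass",
    "default": "newsletter signup discount",
}

profile = build_profile(events)
print(recommend_offer(profile, offers))
```

A real system would of course use a learned model rather than a most-common-category rule, but the shape of the loop — observe, aggregate, recommend — is the same.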

Machine learning is well-suited to this type of data aggregation, analysis, and recommendation. To learn more about the role of artificial intelligence in marketing, I spoke with two experts. Sameer Patel is the CEO of Kahuna Software, and Andrew Eichenbaum is Kahuna’s head of science.

This conversation was episode 209 of the CXOTALK series of discussions with the world’s most important and prolific innovators.

If you care about the future of marketing, AI, and machine learning, then dig into this discussion.


Watch the video embedded above and read an edited transcript below. Or hop over to CXOTALK and check out a complete transcript from the entire 45-minute conversation.

What is Kahuna Software?

Kahuna software is a B2C marketing automation provider. We have built a real-time platform that allows brands to understand the interests and preferences of their consumers, literally within seconds, and put meaningful offers in front of them. This is the new way of using artificial intelligence to engage with your consumers on the right device at the right time.


We look at convergence and the need for consumer brands to rethink how they engage and transact with the consumers.

We’re in this new era, where you can market to anybody, probably 14-16 hours a day. People are that connected to their cell phones: the phone is always there, there are multiple channels to reach out to them, and it’s all through one device. This connectivity has become ubiquitous, at least in the US market, over the past five years.

Now, that being said, it’s easy enough to spam them, and nobody wants to do that because people have become hypersensitive to spam. So it’s not just about not sending spam; it’s knowing what to send them, when to send it, and how to send it, because there’s a range of things. What message do you want to send them? And it just extends out from there.

We’re now in an era where we can think about trying to increase the expected long-term value of all our customers. I want to increase their overall engagement, and this is what marketers can now reach for. It was a nebulous goal before, but it is now something we can move forward on and act upon.

Is this just marketing automation?

Marketing automation was created a decade ago. How does it stack up? The market is over ten billion dollars in size, yet over two hundred and eighty billion dollars of goods are left in abandoned shopping carts every single year. Two hundred eighty billion dollars.

That’s how much you and I go, and we almost buy, and we put it in the shopping cart, and we leave it there. You’re effectively nudging the consumer to the finish line, or providing them with handholding, information and research that might persuade them to finish buying.

The conversion rates are 2-3% on e-commerce. That’s how bad it is. All this investment in what seemed like the right offers leads to a conversion rate of just 2-3%.



Artificial Intelligence Is Learning to Predict and Prevent Suicide

For years, Facebook has been investing in artificial intelligence fields like machine learning and deep neural nets to build its core business—selling you things better than anyone else in the world. But earlier this month, the company began turning some of those AI tools to a more noble goal: stopping people from taking their own lives. Admittedly, this isn’t entirely altruistic. Having people broadcast their suicides from Facebook Live isn’t good for the brand.

But it’s not just tech giants like Facebook, Instagram, and China’s up-and-coming video platform Live.me who are devoting R&D to flagging self-harm. Doctors at research hospitals and even the US Department of Veterans Affairs are piloting new, AI-driven suicide-prevention platforms that capture more data than ever before. The goal: build predictive models to tailor interventions earlier. Because preventative medicine is the best medicine, especially when it comes to mental health.

If you’re hearing more about suicide lately, it’s not just because of social media. Suicide rates surged to a 30-year high in 2014, the last year for which the Centers for Disease Control and Prevention has data. Prevention measures have historically focused on reducing people’s access to things like guns and pills, or educating doctors to better recognize the risks. The problem is, for more than 50 years doctors have relied on correlating suicide-risk with depression and drug abuse. And the research says they’re only slightly better at it than a coin flip.

But artificial intelligence offers the possibility of identifying suicide-prone people more accurately, creating opportunities to intervene long before thoughts turn to action. A study set to publish later this month used machine learning to predict, with 80 to 90 percent accuracy, whether or not someone will attempt suicide, as far off as two years in the future. Using anonymized electronic health records from 2 million patients in Tennessee, researchers at Florida State University trained algorithms to learn which combination of factors, from pain medication prescriptions to number of ER visits each year, best predicted an attempt on one’s own life.
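The general approach (not the study’s actual code or data) can be sketched as a simple classifier over tabular record features. The features, labels, and data below are entirely synthetic, invented purely to illustrate the idea of learning which combination of factors predicts an outcome.

```python
# Generic sketch: fit a classifier on tabular health-record features
# and score risk. Synthetic data; feature names are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized features per patient, e.g.
# [pain-medication prescriptions, ER visits per year]
n = 400
X = rng.normal(size=(n, 2))
# Synthetic labels: outcome loosely tied to both features, plus noise
y = (X @ np.array([1.5, 1.0]) + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Logistic regression fitted by plain gradient descent
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted risk probabilities
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(float)
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.2f}")
```

The real work in a study like this is in the data: assembling, cleaning, and validating millions of records, and evaluating on held-out patients rather than training accuracy.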

Their technique is similar to the text mining Facebook is using on its wall posts. The social network already had a system in which users can report posts that suggest a user is at risk of self harm. Using those reports, Facebook trained an algorithm to recognize similar posts, which they’re testing now in the US. Once the algorithm flags a post, Facebook will make the option to report the post for “suicide or self injury” more prominent on the display. In a personal post, Mark Zuckerberg described how the company is integrating the pilot with other suicide prevention measures, like the ability to reach out to someone during a live video stream.

The next step would be to use AI to analyze video, audio, and text comments simultaneously. But that’s a much trickier engineering feat. Researchers have a pretty good handle on the kind of words people use when they’re talking about their own pain and emotional states. But in a live stream, the only text comes from commenters. In terms of the video itself, software engineers have already figured out ways to automatically tell when someone is naked on-screen, so they’re using similar techniques to detect the presence of a gun or knife. Pills would be way harder.

Prediction Before Prevention

Ideally though, you can intervene even earlier. That’s what one company is trying to do, by collecting totally different kinds of data. Cogito, a Darpa-funded, MIT-spinoff company, is currently testing an app that creates a picture of your mental health just by listening to the sound of your voice. Called Companion, the (opt-in) software passively gathers all the things users say in a day, picking up on vocal cues that signal depression and other mood changes. As opposed to the content of their words, Companion analyzes the tone, energy, fluidity of speaking and levels of engagement with a conversation. It also uses your phone’s accelerometer to figure out how active you are, which is a strong indicator for depression.

The VA is currently piloting the platform with a few hundred veterans—a particularly high-risk group. They won’t have results until the end of this year, but so far the app has been able to identify big life changes—like becoming homeless—that significantly increase one’s risk for self-harm. Those are exactly the kinds of shifts that might not be obvious to a primary care provider unless they were self-reported.


David K. Ahern is leading another trial at Brigham and Women’s Hospital in Boston, Massachusetts, where they’re using Companion to monitor patients with known behavioral disorders. So far it’s been rare for the app to signal a safety alert—which would activate doctors and social workers to check in on the patient. But the real benefit has been the stream of information about patients’ shifting moods and behaviors.

Unlike a clinic visit, this kind of monitoring offers more than just a snapshot of someone’s mental state. “Having that kind of rich data is enormously powerful in understanding the nature of a mental health issue,” says Ahern, who heads up the Program of Behavioral Informatics and eHealth at BWH. “We believe in those patterns there may be gold.” In addition to Companion, Ahern is evaluating lots of other types of data streams—like physiological metrics from wearables and the timing and volume of your calls and texts—to build into predictive models and provide tailored interventions.

Think about it. Between all the sensors in your phone, its camera and microphone and messages, that device’s data could tell a lot about you. More so, potentially, than you could see about yourself. To you, maybe it was just a few missed trips to the gym and a few times you didn’t call your mom back and a few times you just stayed in bed. But to a machine finely tuned to your habits and warning signs that gets smarter the more time it spends with your data, that might be a red flag.

That’s a semi-far off future for tomorrow’s personal privacy lawyers to figure out. But as far as today’s news feeds go, pay attention while you scroll, and notice what the algorithms are trying to tell you.



CNN’s ‘Mostly Human’ is a Real-Life ‘Black Mirror’

In the age of “Westworld” and “Black Mirror,” the possibilities for the future of technology seem endless, especially in the real world. In “Mostly Human with Laurie Segall,” Segall explores the humanity in the constantly moving vehicle that is technology. This six-episode docuseries covers everything from falling in love with robots to using chatbots to communicate with deceased loved ones. It even explores the violence that comes out of the seemingly non-physically confrontational medium that is the internet, as fiction begins to mirror reality.

Premiering on CNNgo on March 12, the series is available to stream free for everyone, regardless of whether or not they hold a cable subscription to the network. In an effort to push its streaming platform forward for those without cable, CNN is using this ad-free experience to showcase a new method for gaining traction on its online platform.

An editor-at-large for CNN Tech and a technology correspondent for CNN Money, Laurie Segall is no stranger to the intricacies of technological innovation. In “Mostly Human,” Segall leans further into the human angle of technology, putting the people behind the tech in the spotlight through her eager exploration of their worlds.



Google uses AI to help diagnose breast cancer

Google, which not long ago was using artificial intelligence to identify cat pictures, has moved on to something bigger — breast cancer.

Google announced Friday that it has achieved state-of-the-art results in using artificial intelligence to identify breast cancer. The findings are a reminder of the rapid advances in artificial intelligence, and its potential to improve global health.

Google used a flavor of artificial intelligence called deep learning to analyze thousands of slides of cancer cells provided by a Dutch university. Deep learning is where computers are taught to recognize patterns in huge data sets. It’s very useful for visual tasks, such as looking at a breast cancer biopsy.
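As a toy illustration of what “recognizing patterns in data sets” means (this bears no relation to Google’s actual model or data), here is a minimal two-layer neural network trained on XOR, a pattern that no single-layer model can capture:

```python
# Minimal "deep" learning demo: a two-layer network learns XOR.
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2 inputs -> 16 tanh hidden units -> 1 sigmoid output
W1 = rng.normal(scale=1.0, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=1.0, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(8000):
    h = np.tanh(X @ W1 + b1)       # hidden activations
    out = sigmoid(h @ W2 + b2)     # predicted probabilities
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagate the mean-squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(f"loss fell from {losses[0]:.3f} to {losses[-1]:.3f}")
```

The same principle, scaled up to millions of parameters and trained on labeled pathology slides, is what lets a deep network learn visual patterns in biopsy images.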

With 230,000 new cases of breast cancer every year in the United States, Google hopes its technology will help pathologists better treat patients. The technology isn’t designed to, or capable of, replacing human doctors.

“What we’ve trained is just a little sliver of software that helps with one part of a very complex series of tasks,” said Lily Peng, the project manager behind Google’s work. “There will hopefully be more and more of these tools that help doctors [who] have to go through an enormous amount of information all the time.”

Peng described to CNNTech how the human and the computer could work together to create better outcomes. Google’s artificial intelligence system excels at being very sensitive to potential cancer. It will flag things a human will miss. But it sometimes will falsely identify something as cancer, whereas a human pathologist is better at saying, “no, this isn’t cancer.”

“Imagine combining these two types of super powers,” Peng said. “The algorithm helps you localize and find these tumors. And the doctor is really good at saying, ‘This is not cancer.’”
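Peng’s point about combining the two “super powers” can be made concrete with some arithmetic. The numbers below are invented for illustration, not Google’s reported metrics: a high-sensitivity model flags candidates, and a high-specificity human reviews only the flags.

```python
# Worked example with made-up numbers: chaining a high-sensitivity
# screen with a high-specificity review raises overall precision.
def screen(cases, sensitivity, specificity):
    """Return (true positives, false positives) after one screening step."""
    positives, negatives = cases
    return positives * sensitivity, negatives * (1 - specificity)

slides = (100, 900)  # hypothetical: 100 cancerous, 900 benign slides

# Step 1: the algorithm rarely misses a tumor but over-flags
tp, fp = screen(slides, sensitivity=0.95, specificity=0.80)

# Step 2: the pathologist reviews only flagged slides and is very
# good at saying "this is not cancer"
tp2, fp2 = screen((tp, fp), sensitivity=0.98, specificity=0.97)

precision = tp2 / (tp2 + fp2)
print(f"flags after both steps: {tp2:.0f} true, {fp2:.1f} false "
      f"(precision {precision:.2f})")
```

Neither step alone achieves that precision with so few missed tumors, which is the win-win Peng describes.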

For now, Google’s progress is still in research mode and remains in the lab. Google isn’t going to become your pathologist’s assistant tomorrow. But Google and many other players are striving toward a future where that becomes a reality.

Jeroen van der Laak, who leads the pathology department at Radboud University Medical Center, believes the first algorithms for cancer will be available within a couple years, and large-scale routine use will occur in about five years. His university provided the slides for Google’s research.

The technology will be especially useful in parts of the world where there’s a shortage of physicians. For patients who don’t have access to a pathologist, an algorithm — even if imperfect — would be a meaningful improvement. Van der Laak highlighted India and China as two underserved areas.



Robots won’t just take our jobs – they’ll make the rich even richer

It may sound strange, but a number of prominent people have lately been asking whether robots should pay taxes. As fears about the impact of automation grow, calls for a “robot tax” are gaining momentum. Earlier this month, the European parliament considered one for the EU. Benoît Hamon, the French Socialist party presidential candidate who is often described as his country’s Bernie Sanders, has put a robot tax in his platform. Even Bill Gates recently endorsed the idea.

The proposals vary, but they share a common premise. As machines and algorithms get smarter, they’ll replace a widening share of the workforce. A robot tax could raise revenue to retrain those displaced workers, or supply them with a basic income.

The good news is that the robot apocalypse hasn’t arrived just yet. Despite a steady stream of alarming headlines about clever computers gobbling up our jobs, the economic data suggests that automation isn’t happening on a large scale. The bad news is that if it does, it will produce a level of inequality that will make present-day America look like an egalitarian utopia by comparison.

The real threat posed by robots isn’t that they will become evil and kill us all, which is what keeps Elon Musk up at night – it’s that they will amplify economic disparities to such an extreme that life will become, quite literally, unlivable for the vast majority. A robot tax may or may not be a useful policy tool for averting this scenario. But it’s a good starting point for an important conversation. Mass automation presents a serious political problem – one that demands a serious political solution.

Automation isn’t new. In the late 16th century, an English inventor developed a knitting machine known as the stocking frame. By hand, workers averaged 100 stitches per minute; with the stocking frame, they averaged 1,000. This is the basic pattern, repeated through centuries: as technology improves, it reduces the amount of labor required to produce a certain number of goods.

So far, however, this phenomenon hasn’t produced extreme unemployment. That’s because automation can create jobs as well as destroy them. One recent example is bank tellers: ATMs began to appear in the 1970s, but the total number of tellers has actually grown since then. As ATMs made it cheaper to run a branch, banks opened more branches, leading to more tellers overall. The job description has changed – today’s tellers spend more time selling financial services than dispensing cash – but the jobs are still there.

What’s different this time is the possibility that technology will become so sophisticated that there won’t be anything left for humans to do. What if your ATM could not only give you a hundred bucks, but sell you an adjustable-rate mortgage? While the current rhetoric around artificial intelligence is overhyped, there have been meaningful advances over the past several years. And it’s not inconceivable that much bigger breakthroughs are on the horizon. Instead of merely transforming work, technology might begin to eliminate it. Instead of making it possible to create more wealth with less labor, automation might make it possible to create more wealth without labor.



Chat app Line is developing an AI assistant and Amazon Echo-style smart speaker

Messaging app Line is taking a leaf out of the books of Amazon, Google and others after it launched its own artificial intelligence platform.

A voice-powered concierge service called Clova — short for “Cloud Virtual Assistant” — is the centerpiece of the service, much like Amazon’s Alexa, Microsoft’s Cortana and Google Assistant. Beyond the assistant living inside the main Line chat app, the company said it has plans to release hardware with support for Clova baked-in, as Amazon and Google have done, and work with third parties to integrate the service into additional hardware. Sony and toy maker Tomy are among the early partners it is talking to.

Also, interestingly, Line has acquired a majority stake in the Japanese company behind a ‘holographic’ AI service. The startup is Vinclu, and its Gatebox is a ‘virtual robot’ that gives AI a graphical presence in the form of a manga cartoon-style female — very Japanese.

“Gatebox’s holographic home assistant is voice activated and uses a variety of sensors to interact with the device’s operator in a realistic and natural manner, while also connecting to a range of devices in the home,” Line said.

As this promotional video shows, Gatebox is painted more like a virtual companion than a gender-neutral AI assistant. That might require a different approach if the product is to ship outside of Japan, perhaps involving Brown the bear and others who star in Line’s sticker packs.



China May Soon Surpass America on the Artificial Intelligence Battlefield

The rapidity of recent Chinese advances in artificial intelligence indicates that the country is capable of keeping pace with, or perhaps even overtaking, the United States in this critical emerging technology. The successes of major Chinese technology companies, notably Baidu Inc., Alibaba Group and Tencent Holding Ltd.—and even a number of start-ups—have demonstrated the dynamism of these private-sector efforts in artificial intelligence. From speech recognition to self-driving cars, Chinese research is cutting edge. Although the military dimension of China’s progress in artificial intelligence has remained relatively opaque, there is also relevant research occurring in the People’s Liberation Army research institutes and the Chinese defense industry. Evidently, the PLA recognizes the disruptive potential of the varied military applications of artificial intelligence, from unmanned weapons systems to command and control. Looking forward, the PLA anticipates that the advent of artificial intelligence will fundamentally change the character of warfare, ultimately resulting in a transformation from today’s “informationized” (信息化) ways of warfare to future “intelligentized” (智能化) warfare.

The Chinese leadership has prioritized artificial intelligence at the highest levels, recognizing its expansive applications and strategic implications. The initial foundation for China’s progress in artificial intelligence was established through long-term research funded by national science and technology plans, such as the 863 Program. Notably, China’s 13th Five-Year Plan (2016–20) called for breakthroughs in artificial intelligence, which was also highlighted in the 13th Five-Year National Science and Technology Innovation Plan. The new initiatives focus on artificial intelligence and have been characterized as the “China Brain Plan” (中国脑计划), which seeks to enhance understandings of human and artificial intelligence alike. In addition, the Internet Plus and Artificial Intelligence, a three-year implementation plan for artificial intelligence (2016–18), emphasizes the development of artificial intelligence and its expansive applications, including in unmanned systems, in cyber security and for social governance. Beyond these current initiatives, the Chinese Academy of Engineering has proposed an “Artificial Intelligence 2.0 Plan,” and the Ministry of Science and Technology of the People’s Republic of China has reportedly tasked a team of experts to draft a plan for the development of artificial intelligence through 2030. The apparent intensity of this support and funding will likely enable continued, rapid advances in artificial intelligence with dual-use applications.


China’s significant progress in artificial intelligence must be contextualized by the national strategy of civil-military integration or “military-civil fusion” (军民融合) that has become a high-level priority under President Xi Jinping’s leadership. Consequently, it is not unlikely that nominally civilian technological capabilities will eventually be utilized in a military context. For instance, An Weiping (安卫平), deputy chief of staff of the PLA’s Northern Theater Command, has highlighted the importance of deepening civil-military integration, especially for such “strategic frontier technologies” as artificial intelligence. Given this strategic approach, the boundaries between civilian and military research and development tend to blur. In a notable case, Li Deyi (李德毅) acts as the director of the Chinese Association for Artificial Intelligence, and he is affiliated with Tsinghua University and the Chinese Academy of Engineering. Concurrently, Li Deyi is a major general in the PLA who serves as deputy director of the Sixty-First Research Institute, under the aegis of the Central Military Commission (CMC) Equipment Development Department.



Artificial intelligence ‘will save wearables’!

When a technology hype flops, do you think the industry can use it as a learning experience? A time of self-examination? An opportunity to pause and reflect on making the next consumer or business tech hype a bit less stupid?

Don’t be silly.

What it does is pile the next hype on to the last hype, and call it “Hype 2.0”.

“With AI integration in wearables, we are entering ‘wearable 2.0’ era,” proclaim analysts Counterpoint Research in one of the most optimistic press releases we’ve seen in a while.

It’s certainly bullish for market growth, predicting that “AI-powered wearables will grow 376 per cent annually in 2017 to reach 60 million units.”

In fact it’s got a new name for these – “hearables”. Apple will apparently have 78 per cent of this hearable market.

The justification for the claim is that language-processing assistants like Alexa will be integrated into more products. Counterpoint also includes Apple Airpods and Beats headphones as “AI-powered hearables”, which may be stretching things a little.

It almost seems rude to point out that the current wearables market – a bloodbath for vendors – is already largely “hearable”. Android Wear has been obeying OK Google commands spoken by users since it launched in 2014.

If a “smart” natural language interface had the potential to make wearables sell, surely we would know it by now. But we hardly need to tell you what sales of these devices are. Many vendors have hit pause, or canned their efforts completely. You could even argue that talking into a wearable may be one of the reasons why the wearable failed to be a compelling or successful consumer electronics story. People don’t want to do it.

Sprinkling the latest buzzword – machine learning or AI – over something that isn’t a success doesn’t suddenly make that thing a success. But AI has always had a cult-like quality to it: it’s magic, and fills a God-shaped hole. For 50 years, the divine promise of “intelligent machines” has periodically overcome people’s natural scepticism as they imagine a breakthrough is close at hand. Then it recedes into the labs again. All that won’t stop people wishing that this time AI has Lazarus-like powers.

We can’t wait for our mac



Now Anyone Can Deploy Google’s Troll-Fighting AI

LAST SEPTEMBER, A Google offshoot called Jigsaw declared war on trolls, launching a project to defeat online harassment using machine learning. Now, the team is opening up that troll-fighting system to the world.

On Thursday, Jigsaw and its partners on Google’s Counter Abuse Technology Team released a new piece of code called Perspective, an API that gives any developer access to the anti-harassment tools that Jigsaw has worked on for over a year. Part of the team’s broader Conversation AI initiative, Perspective uses machine learning to automatically detect insults, harassment, and abusive speech online. Enter a sentence into its interface, and Jigsaw says its AI can immediately spit out an assessment of the phrase’s “toxicity” more accurately than any keyword blacklist, and faster than any human moderator.

The Perspective release brings Conversation AI a step closer to its goal of helping to foster troll-free discussion online, and filtering out the abusive comments that silence vulnerable voices—or, as the project’s critics have less generously put it, to sanitize public discussions based on algorithmic decisions.

An Internet Antitoxin

Conversation AI has always been an open source project. But by opening up that system further with an API, Jigsaw and Google can offer developers the ability to tap into that machine-learning-trained speech toxicity detector running on Google’s servers, whether for identifying harassment and abuse on social media or more efficiently filtering invective from the comments on a news website.

“We hope this is a moment where Conversation AI goes from being ‘this is interesting’ to a place where everyone can start engaging and leveraging these models to improve discussion,” says Conversation AI product manager CJ Adams. For anyone trying to rein in the comments on a news site or social media, Adams says, “the options have been upvotes, downvotes, turning off comments altogether or manually moderating. This gives them a new option: Take a bunch of collective intelligence—that will keep getting better over time—about what toxic comments people have said would make them leave, and use that information to help your community’s discussions.”

On a demonstration website launched today, Conversation AI will now let anyone type a phrase into Perspective’s interface to instantaneously see how it rates on the “toxicity” scale. Google and Jigsaw developed that measurement tool by taking millions of comments from Wikipedia editorial discussions, the New York Times and other unnamed partners—five times as much data, Jigsaw says, as when it debuted Conversation AI in September—and then showing every one of those comments to panels of ten people Jigsaw recruited online to state whether they found the comment “toxic.”

The resulting judgements gave Jigsaw and Google a massive set of training examples with which to teach their machine learning model, just as human children are largely taught by example what constitutes abusive language or harassment in the offline world. Type “you are not a nice person” into its text field, and Perspective will tell you it has an 8 percent similarity to phrases people consider “toxic.” Write “you are a nasty woman,” by contrast, and Perspective will rate it 92 percent toxic, and “you are a bad hombre” gets a 78 percent rating. If one of its ratings seems wrong, the interface offers an option to report a correction, too, which will eventually be used to retrain the machine learning model.

The Perspective API will allow developers to access that test with automated code, providing answers quickly enough that publishers can integrate it into their website to show toxicity ratings to commenters even as they’re typing. And Jigsaw has already partnered with online communities and publishers to implement that toxicity measurement system. Wikipedia used it to perform a study of its editorial discussion pages. The New York Times is planning to use it as a first pass of all its comments, automatically flagging abusive ones for its team of human moderators. And the Guardian and the Economist are now both experimenting with the system to see how they might use it to improve their comment sections, too. “Ultimately we want the AI to surface the toxic stuff to us faster,” says Denise Law, the Economist’s community editor. “If we can remove that, what we’d have left is all the really nice comments. We’d create a safe space where everyone can have intelligent debates.”
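For developers curious what that integration looks like in practice, here is a minimal Python sketch of a Perspective client: one function builds the JSON request body asking for a TOXICITY score, another extracts the 0-to-1 summary score from a response. The endpoint path and field names follow Google’s published v1alpha1 documentation, but verify them against the current docs before relying on this; the API key is a placeholder.

```python
# Minimal sketch of a Perspective API client. Endpoint, request body,
# and response shape follow Google's published v1alpha1 docs at the
# time of writing; the API key below is a placeholder, not a real key.
import json
from urllib import request

API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(text):
    """Build the JSON body asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Pull the 0..1 summary toxicity score out of a response dict."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def rate_comment(text):
    """POST a comment to Perspective and return its toxicity score."""
    req = request.Request(
        API_URL,
        data=json.dumps(build_request(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return toxicity_score(json.load(resp))
```

A publisher could call `rate_comment()` as a commenter types (debounced, since each call is a network round trip) and surface the score in the comment box, which is essentially what the live demo described above does.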



Artificial intelligence set to transform the patient experience

ORLANDO – From Watson to Siri, Alexa to Cortana, consumers and patients have become much more familiar with artificial intelligence and natural language processing in recent years. Pick your terminology: machine learning, cognitive computing, neural networks/deep learning. All are becoming more commonplace – in our smartphones, in our kitchens – and as they continue to evolve at a rapid pace, expectations are high for how they’ll impact healthcare.

Skepticism is, too. And even fear.

As it sparks equal parts doubt and hope (and not a little hype) among patients, physicians and technologists, a panel of IT experts at HIMSS17 discussed the future of AI in healthcare on Sunday afternoon.

Kenneth Kleinberg, managing director at The Advisory Board Company, spoke with execs from two medical AI startups: Cory Kidd, CEO of Catalia Health, and Jay Parkinson, MD, founder and CMO of Sherpaa.

Catalia developed a small robot, the Mabu Personal Healthcare Companion, aimed at assisting with “long-term patient engagement.” It’s able to have tailored conversations with patients that can evolve over time as the platform – developed using principles of behavioral psychology – gains daily data about treatment plans, health challenges and outcomes.

Sherpaa is billed as an “on-demand doctor practice” that connects subscribers with physicians, via its app, who can make diagnoses, order lab tests and imaging and prescribe medications at locations near the patient. “Seventy percent of the time, the doctors have a diagnosis,” said Parkinson. “Most cases can be solved virtually.” Rather than just a virtual care platform, it enables “care coordination with local clinicians in the community,” he said.

In this fast-changing environment, there are many questions to ask: “We’re starting to see these AI systems appear in other parts of our lives,” said Kleinberg. “How valuable are they? How capable are they? What kind of authority will these systems attain?”

And also: “What does it mean to be a physician and patient in this new age?”

Kidd said he’s a “big believer – when it’s used right.”

Parkinson agreed: “It has to be targeted to be successful.”

Another important question: For all the hype and enthusiasm about AI, “where on the inflection curve are we?” asked Kleinberg. “Is it going to take off and get a lot better? And does it offer more benefits at the patient engagement level? Or as an assistant to clinicians?”

For Kidd, it’s clearly the former, as Catalia’s technology deploys AI to help patients manage their own chronic conditions.

“The kinds of algorithms we’re developing, we’re building up psychological models of patients with every encounter,” he explained. “We start with two types of psychologies: the psychology of relationships – how people develop relationships over time – as well as the psychology of behavior change: How do we choose the right technique to use with this person right now?”

The platform also gets “smarter” as it becomes more attuned to “what we call our biographical model, which is kind of a catch-all for everything else we learn in conversation,” he said. “This man has a couple cats, this woman’s son calls her every Sunday afternoon, whatever it might be that we’ll use later in conversations.”

Consumer applications driving clinical innovations
AI is fast advancing in healthcare in large part because it’s evolving so quickly in the consumer space. Take Apple’s Siri, for instance: “The more you talk to it, the better it makes our product,” said Kidd. “Literally. We’re licensing the same voice recognition and voice output technology that’s running on your iPhone right now.”

For his part, Parkinson sees problems with simply adding AI technology onto the doctor-patient relationship as it currently exists. Most healthcare encounters involve “an oral conversation between doctor and patient,” he said, where “retention is 15 percent or less.”

For AI to truly be an effective augmentation of clinical practices, that conversation “needs to be less oral and more text-driven,” he said. “I’m worried about layering AI on a broken delivery process.”

But machine learning is starting to change the game in areas large and small throughout healthcare. Kleinberg pointed to the area of image recognition. IBM, for instance, made headlines when it acquired Merge Healthcare for $1 billion in 2015, allowing Watson to “see” medical images – the largest data source in healthcare.

Then there are the various iPhone apps that say they can help diagnose skin cancer with photos users take of their own moles. Kleinberg said he mentioned the apps to a dermatologist friend of his.

“I want to quote him very carefully: He said, ‘Naaaaahhhhhh.'”

But Parkinson took a different view: “About 25 percent of our cases have photos attached,” he said. “Right now, if it’s a weird mole we’re sending people out to see a dermatologist. But I would totally love to replace that (doctor) with a robot. And I don’t think that’s too far off.”

In the near term, however, “you would be amazed at the image quality that people taking photographs think are good photographs,” he said. “So there’s a lot of education for the patient about how to take a picture.”

The patient’s view
If artificial intelligence is having a promising if controversial impact so far on the clinical side, one of the most important aspects of this evolution still has some questions to answer. Most notably: What does the patient think?

On one hand, Kleinberg pointed to AI pilots where patients paired with humanoid robots “felt a sense of loss” after the test ended. “One woman followed the robot out and waved goodbye to it.”

On the other, “some people are horrified that we would be letting machines play a part in a role that should be played by humans,” he said.

The big question, then: “Do we have a place now for society and a system such as this?” he asked.

“The first time I put something like this in a patient’s home was 10 years ago now,” said Kidd. “We’ve seen, with the various versions of AI and robots, that people can develop an attachment to them. At the same time, a typical conversation is two or three minutes. It’s not like people spend all day talking with these.”

It’s essential, he argued, to be up front with patients about just what the technology can and should do.

“How you introduce this, and how you couch the terminology around this technology and what it can and can’t do is actually very important in making it effective for patients,” said Kidd. “We don’t try to convince anyone that this is a doctor or a nurse. As long as we set up the relationship in the right way so people understand how it works and what it can do, it can be very effective.

“There is this cultural conception that AI and robotics can be scary,” he conceded. “But what I’ve seen, putting this in front of patients is that this is a tool that can do something and be very effective, and people like it a lot.”



RealDoll Creating Artificial Intelligence System, Robotic Sex Dolls

Sex doll manufacturer RealDoll is creating an AI system which will allow users to customize their sex doll’s personality and create a relationship with it over time.

Harmony AI, which is set to be released on April 15, will be a smartphone app and is reported to feature a range of traits for customers to choose for their sex dolls, while the dolls will also be able to learn about their owners and respond in different ways accordingly.

“We are developing the Harmony AI system to add a new layer to the relationships people can have with a RealDoll,” said CEO Matt McMullen to Digital Trends. “Many of our clients rely on their imaginations to a great degree to impose imagined personalities on their dolls. With the Harmony AI, they will be able to actually create these personalities instead of having to imagine them.”

“They will be able to talk to their dolls, and the AI will learn about them over time through these interactions, thus creating an alternative form of relationship,” he continued. “The scope of conversations possible with the AI is quite diverse, and not limited to sexual subject matter. We feel that this system, and this technology, will appeal to a segment of the population that struggles with forming intimate connections with other people, whether by choice or circumstance. Furthermore, it will likely attract those who seek to explore uncharted and new territory where relationships and sex are concerned.”


Harmony AI will be the first product in a range of next-generation technologies coming from RealDoll over the next few years.

Other planned releases include “robotic head systems,” which are set to be released by the end of the year, followed by a “virtual reality platform” in 2018.


RealDoll isn’t the first company to recognize the potential connection between sex and AI. “This happens because people are lonely and bored… It is a symptom of our society,” said Robin Labs chief executive Ilya Eckstein, who claims that his company’s virtual assistant “Robin” is used by “teenagers and truckers without girlfriends” for up to 300 conversations a day.

“As well as the people who want to talk dirty, there are men who want a deeper sort of relationship or companionship,” he continued, adding that some people wanted to talk “for no particular reason” and were just lonely or bored.

In an interview with Breitbart Tech last year, Futurologist Dr. Ian Pearson also predicted that sex with robots would be “fully emotional” in the future, adding that people will eventually spend “about the same as they do today on a decent family-size car.”

“Artificial intelligence is reaching human levels and also becoming emotional as well,” claimed Dr. Pearson. “So people will actually have quite strong emotional relationships with their own robots. In many cases that will develop into a sexual one because they’ll already think that the appearance of the robot matches their preference anyway, so if it looks nice and it has a superb personality too it’s inevitable that people will form very strong emotional bonds with their robots and in many cases that will lead to sex.”



Microsoft CEO says artificial intelligence is the ‘ultimate breakthrough’

Microsoft CEO Satya Nadella kicked off a three-day visit to India in the nation’s startup hub, Bengaluru, by speaking on a number of topics close to his heart, meeting the startup community and announcing a critical partnership with Flipkart.

“First one is cloud and AI, and they kind of go together,” said Satya Nadella. The fact that you have the power of the Internet and infrastructure to read large amounts of data is what powers artificial intelligence. And India is well on its way to rolling out technologies and services powered through the cloud and magic of AI, according to Satya Nadella.

The second thing that’s exciting, according to Nadella, is the power of Cortana, which he describes as the third runtime as far as humans’ interaction with computers is concerned. Natural language processing and the ability to converse with a computer, phone, gadget or service — powered through Microsoft Azure Cloud — is the ultimate promise of Cortana. And it’s also being rolled out across several products and partnerships (Microsoft & Tata Motors, for instance).

And thirdly, just as Tim Cook spoke about this last week, Satya Nadella also believes that Augmented Reality holds revolutionary potential in its applications through breakthrough technology — like the Microsoft HoloLens. He spoke about just how excited he was the very first time he tried on the Microsoft HoloLens, going through a virtual anatomy class, something that blew his mind in terms of the limitless potential of AR and VR technology.

On Entrepreneurship

Three years after being appointed Microsoft CEO in February 2014, Satya Nadella — an alumnus of the Manipal Institute of Technology, batch of 1988 — also discussed digital transformation in India, the need for an intelligent cloud, and other topics to ensure speedy digitization of the nation. On startups, Satya Nadella said there was a lot of potential for entrepreneurs to achieve great things by solving unique problems in India through the power of technology, and he reaffirmed Microsoft’s commitment to building tools and a technology platform that enable and empower others to build great technologies on top of them.

Microsoft & Flipkart strategic partnership

Flipkart will adopt Microsoft Azure as its exclusive public cloud platform. The partnership was announced by Microsoft CEO Satya Nadella and Flipkart Group CEO Binny Bansal at an event in Bangalore.


