Tag Archives: artificial intelligence

AI will obliterate half of all jobs, starting with white collar, says ex-Google China president

The upcoming worldwide workforce reckoning that artificial intelligence is expected to bring will happen much sooner than many experts predict, the former president of Google China told CNBC on Monday.

Kai-Fu Lee, now chairman and CEO of Sinovation Ventures, believes that about half of all jobs will disappear over the next decade, replaced by AI and the next generation of robots, in the fastest period of disruption in history.

“AI, at the same time, will be a replacement for blue collar and white collar jobs,” said Lee, a renowned Chinese technologist and investor who held positions at Apple and Microsoft in addition to Alphabet’s Google. But white collar jobs will go first, he warned.


“The white collar jobs are easier to take because they’re purely a quantitative analytical process. Reporters, traders, telemarketing, telesales, customer service, [and] analysts, these can all be replaced by software,” he explained on “Squawk Box.” “To do blue collar, some of the work requires hand-eye coordination, things that machines are not yet good enough to do.”


Lee knocked down the argument that the jobs lost will create new ones to service and program AI and robots. “Robots are clearly replacing people’s jobs. They’re working 24 by 7. They are more efficient. They need some programming. But one programmer can program 10,000 robots.”

Besides taking jobs beyond factory floors, robots and AI are already starting to take over some of the mundane tasks around people’s homes. Lee pointed to the Amazon Echo as an example.

“The robots don’t have to be anthropomorphized. They can just be an appliance,” he said. “The car that has autonomous driving is not going to have a humanoid person [driving]. It’s just going to be without a steering wheel.”

Lee said that while economic growth “will go dramatically up because AI can do so many things so much faster” than humans, it’ll force everyone to rethink the practical and social impact of fewer jobs. “If a lot of people will find happiness without working, that would be a happy outcome.”

But in a Washington Post op-ed last month, Lee argued against universal basic income, the idea of governments providing a steady stipend to help each citizen make ends meet regardless of need, employment status, or skill level. UBI is being bandied about as a possible solution to an economy that won’t have nearly enough jobs for working-age adults.

“The optimists naively assume that UBI will be a catalyst for people to reinvent themselves professionally,” he wrote. It may work among Silicon Valley and other highly motivated entrepreneurs, he added, “but this most surely will not happen for the masses of displaced workers with obsolete skills, living in regions where job loss is exacerbated by traditional economic downturn.”

Lee sees a different plan of action. “Instead of just redistributing cash and hoping for the best … we need to retrain and adapt so that everyone can find a suitable profession.”

Some of the solutions he offered in his commentary include developing more jobs that require social skills such as social workers, therapists, teachers, and life coaches as well as encouraging people to volunteer and considering paying them.

Lee wrote, “We need to redefine the idea of work ethic for the new workforce paradigm. The importance of a job should not be solely dependent on its economic value but should also be measured by what it adds to society.”

“We should also reassess our notion that longer work hours are the best way to achieve success,” he concluded.

Source:

https://www.cnbc.com/2017/11/13/ex-google-china-president-a-i-to-obliterate-white-collar-jobs-first.html


The movie ‘Hidden Figures’ can teach us how to keep jobs in an AI future

This weekend, I finally watched “Hidden Figures.” I took my 9-year-old daughter with me to witness how instrumental women of color were to the success of several NASA missions — something that historically has been associated with white male achievement. If you have not seen it yet, I highly recommend it. The acting is superb, and the story offers so much education, both on race relations and women in the workplace.

What I want to focus on is something the director and the cast possibly never imagined would matter. I do so not because it is the most important aspect of the film, but simply because it is very relevant to the tech transition we are experiencing right now.


All the talk surrounding artificial intelligence is as much about the technology itself as it is about the impact its adoption will have on different aspects of our lives: business models in the automotive industry, the insurance business, public transportation, and search and advertising, as well as more personal consequences such as human-to-human interaction and our sources of knowledge and education. Change will not come overnight, but we had better be prepared, because it will come.

New tech requires new skills

Change came in 1962 for the segregated West Area Computer Division of Langley Research Center in Virginia, where the three women who are the main protagonists of the story worked. Mathematician Katherine Goble and de facto supervisor Dorothy Vaughan are both directly affected by new tech rolling into the facility in the form of the IBM 7090.

If you are not familiar with the IBM 7090 (I was not before this weekend), it was the third member of the IBM 700/7000 series of computers, designed for large-scale scientific and technological applications. In layman’s terms, the 7090 could perform in the blink of an eye all the calculations that took the computer division hours. Dorothy understood the threat and, armed with her wit and a book on programming languages, learned to program the IBM 7090, taught her team to do the same, shifted their skills and saved their jobs.

I realize that part of this story might be embellished for the benefit of the screenplay, and that the world is much more complicated. However, I do think that what is at its core is very relevant: the creation of new skill sets.

Although AI has the potential to affect not only manual jobs that can be automated but also, theoretically, jobs that require learning and decision making, the immediate threat is certainly to the former.

We focus a lot, and rightly so, on the job losses AI will cause, but we have not yet started to focus on teaching the new skills that could limit those losses. As I said, AI will not magically appear overnight, but we would be fools to think we have plenty of time to build the skills our “augmented” world will require, from new programming languages to new branches of law and insurance, QA testing and more. Empowering people with new skills will be key not only to having a job but also to keeping our incomes in step with the higher costs these new worlds will entail. Providing a framework for education is a political responsibility as well as a corporate one.

Who will we trust?

The IBM 7090 replaces Katherine when it comes to checking calculations, but just as Friendship 7 is ready to launch, some discrepancies arise in the electronic calculations for the capsule’s recovery coordinates. Astronaut John Glenn asks the director of the Space Task Group to have Katherine recheck the numbers. When Katherine confirms the coordinates, Glenn thanks the director saying: “You know, you cannot trust something you cannot look in the eyes.”

Source:

https://www.recode.net/2017/1/18/14312464/hidden-figures-movie-artificial-intelligence-ai-consumer-trust

An AI detected colorectal cancer with 86 percent accuracy

A new computer-aided endoscopic system that probes for signs of tumor or cancer growth in the colon may very well be the future of cancer detection. Assisted by artificial intelligence (AI), the new diagnostic system is able to tell whether clumps of cells that grow along the walls of the colon, called colorectal polyps, are benign tumors known as colorectal adenomas.

The computer-assisted diagnostic system was trained using more than 30,000 images of colorectal polyps, each magnified 500 times, and operates using machine learning. The AI can check approximately 300 features of a polyp, which it compares against its existing “knowledge,” in less than a second. After being used successfully in preliminary studies, the system moved on to prospective trials. The results of these trials, the first for AI-assisted endoscopy in a clinical setting, were presented at the 25th UEG Week in Barcelona, Spain.

The prospective study was conducted by a team led by Dr. Yuichi Mori from Showa University in Yokohama, Japan. Mori and his colleagues tested the new system in 250 patients previously identified as having colorectal polyps. The AI predicted the pathology of each polyp, and the predictions were compared with the final pathology reports from the resected specimens. The results were highly encouraging: the system assessed 306 polyps in real time, with 94 percent sensitivity, 79 percent specificity, and 86 percent accuracy. In identifying abnormal tissue growth, the system demonstrated positive and negative predictive values of 79 percent and 93 percent, respectively.
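For readers who want to see how those figures fit together, here is a minimal Python sketch, purely illustrative and not part of the study, showing how sensitivity, specificity, accuracy and the predictive values are derived from a confusion matrix. The counts used in the example are hypothetical and were chosen only to roughly approximate the reported percentages for the 306 polyps.

```python
# Illustrative only: how the reported diagnostic metrics relate to
# confusion-matrix counts. The counts below are hypothetical and merely
# chosen to roughly approximate the percentages quoted in the article.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),                # true-positive rate
        "specificity": tn / (tn + fp),                # true-negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
    }

if __name__ == "__main__":
    # Hypothetical split of the 306 polyps, for illustration only.
    print(diagnostic_metrics(tp=131, fp=35, tn=131, fn=9))
```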

AI in healthcare

In short, the AI was able to fairly accurately identify which abnormal colon cell growths were most likely to be cancerous. “The most remarkable breakthrough with this system is that artificial intelligence enables real-time optical biopsy of colorectal polyps during colonoscopy, regardless of the endoscopists’ skill,” Mori said, speaking during the Opening Plenary at the UEG Week. “This allows the complete resection of adenomatous polyps and prevents unnecessary polypectomy of non-neoplastic polyps.”

Furthermore, the researchers presented the results of their prospective study to prove that their system was ready for clinical trials. “We believe these results are acceptable for clinical application and our immediate goal is to obtain regulatory approval for the diagnostic system,” Mori added.

While this may be the first AI-enabled, real-time biopsy, as Mori described it, it’s not the first time AI has been used to improve medical diagnosis and overall medical research. For example, there is an AI that is effective at identifying skin cancer, and chipmaker NVIDIA is working on a moonshot project to accelerate cancer research using deep learning. Also working on cancer diagnosis is IBM’s Watson, which in some cases has proven to be 99 percent accurate in recommending the same treatments as doctors. Improved cancer detection can spell the difference between a treatment that works and one that doesn’t, so these advancements are potentially life-saving.

Moving forward, Mori’s team plans to conduct a multicenter study to aid eventual clinical tests. They’re also working on an automatic polyp detection system. “Precise on-site identification of adenomas during colonoscopy contributes to the complete resection of neoplastic lesions,” said Dr. Mori. “This is thought to decrease the risk of colorectal cancer and, ultimately, cancer-related death.”

Source:

https://futurism.com/ai-assisted-detection-identifies-colon-cancer-automatically-and-in-real-time/

Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent

Silicon Valley’s start-ups have always had a recruiting advantage over the industry’s giants: Take a chance on us and we’ll give you an ownership stake that could make you rich if the company is successful.

Now the tech industry’s race to embrace artificial intelligence may render that advantage moot — at least for the few prospective employees who know a lot about A.I.

Tech’s biggest companies are placing huge bets on artificial intelligence, banking on things ranging from face-scanning smartphones and conversational coffee-table gadgets to computerized health care and autonomous vehicles. As they chase this future, they are doling out salaries that are startling even in an industry that has never been shy about lavishing a fortune on its top talent.

Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.

Well-known names in the A.I. field have received compensation in salary and shares in a company’s stock that total single- or double-digit millions over a four- or five-year period. And at some point they renew or negotiate a new contract, much like a professional athlete.

At the top end are executives with experience managing A.I. projects. In a court filing this year, Google revealed that one of the leaders of its self-driving-car division, Anthony Levandowski, a longtime employee who started with Google in 2007, took home over $120 million in incentives before joining Uber last year through the acquisition of a start-up he had co-founded, a move that drew the two companies into a court fight over intellectual property.

Salaries are spiraling so fast that some joke the tech industry needs a National Football League-style salary cap on A.I. specialists. “That would make things easier,” said Christopher Fernandez, one of Microsoft’s hiring managers. “A lot easier.”

There are a few catalysts for the huge salaries. The auto industry is competing with Silicon Valley for the same experts who can help build self-driving cars. Giant tech companies like Facebook and Google also have plenty of money to throw around and problems that they think A.I. can help solve, like building digital assistants for smartphones and home gadgets and spotting offensive content.

Most of all, there is a shortage of talent, and the big companies are trying to land as much of it as they can. Solving tough A.I. problems is not like building the flavor-of-the-month smartphone app. In the entire world, fewer than 10,000 people have the skills necessary to tackle serious artificial intelligence research, according to Element AI, an independent lab in Montreal.

“What we’re seeing is not necessarily good for society, but it is rational behavior by these companies,” said Andrew Moore, the dean of computer science at Carnegie Mellon University, who previously worked at Google. “They are anxious to ensure that they’ve got this small cohort of people” who can work on this technology.

Costs at an A.I. lab called DeepMind, acquired by Google for a reported $650 million in 2014, when it employed about 50 people, illustrate the issue. Last year, according to the company’s recently released annual financial accounts in Britain, the lab’s “staff costs” as it expanded to 400 employees totaled $138 million. That comes out to $345,000 an employee.

“It is hard to compete with that, especially if you are one of the smaller companies,” said Jessica Cataneo, an executive recruiter at the tech recruiting firm CyberCoders.

Source:

IBM and MIT pen 10-year, $240M AI research partnership

IBM and MIT came together today to sign a 10-year, $240 million partnership agreement that establishes the MIT-IBM Watson AI Lab at the prestigious Cambridge, MA academic institution.

The lab will be co-chaired by Dario Gil, IBM Research VP of AI, and Anantha P. Chandrakasan, dean of MIT’s School of Engineering.

Big Blue intends to invest $240 million in the lab, where IBM researchers and MIT students and faculty will work side by side to conduct advanced AI research. As for what happens to the IP the partnership produces, the two sides were a bit murky.

This much we know: MIT plans to publish papers related to the research, while the two parties plan to open source a good part of the code. Some of the IP will end up inside IBM products and services. MIT hopes to generate some AI-based startups as part of the deal too.

“The core mission of [the] joint lab is to bring together MIT scientists and IBM [researchers] to shape the future of AI and push the frontiers of science,” IBM’s Gil told TechCrunch.

To that end, the two parties plan to put out requests to IBM scientists and the MIT student community to submit ideas for joint research. To narrow the focus of what could be a broad endeavor, they have established a number of principles to guide the research.


These include developing AI algorithms with the goal of getting beyond specific applications of neural-network-based deep learning and finding more generalized ways to solve complex problems in the enterprise.

Secondly, they hope to harness the power of machine learning with quantum computing, an area that IBM is working hard to develop right now. There is tremendous potential for AI to drive the development of quantum computing and conversely for quantum computing and the computing power it brings to drive the development of AI.

With IBM’s Watson Security and Healthcare divisions located right down the street from MIT in Kendall Square, the two parties have agreed to concentrate on these two industry verticals in their work. Finally, the two teams plan to work together to help understand the social and economic impact of AI in society, which as we have seen has already proven to be considerable.

While this is a big deal for both MIT and IBM, Chandrakasan made clear that the lab is but one piece of a broader campus-wide AI initiative. Still, the two sides hope the new partnership will eventually yield a number of research and commercial breakthroughs that will lead to new businesses both inside IBM and in the Massachusetts startup community, particularly in the healthcare and cybersecurity areas.

Source:

IBM and MIT pen 10-year, $240M AI research partnership (TechCrunch)

Elon Musk Predicts The Cause Of World War III

Elon Musk has a prediction about the cause of World War III, and it’s not President Donald Trump. It may not even involve humans at all.

The head of Tesla and SpaceX on Monday shared a link on Twitter to a report about Russian President Vladimir Putin discussing artificial intelligence:

“Artificial intelligence is the future, not only for Russia, but for all humankind,” Putin was quoted as saying. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”


Musk added that, in his view, competition for AI superiority at the national level is the most likely cause of World War III.

By comparison, Musk said, the saber-rattling from North Korea wasn’t much to worry about. 

One Twitter follower suggested that private companies, rather than governments, were far better at artificial intelligence. 

Musk replied:

He also apologized for the glum tweets, saying he was depressing himself, and promised: “Fun, exciting tweets coming soon!”

Source:

http://www.huffingtonpost.com/entry/elon-musk-world-war-iii_us_59ae3d24e4b0354e440c02a6

Putin says the country that perfects AI will be ‘ruler of the world’

Forget the arms race or space race — the new battle for technological dominance revolves around AI, according to Vladimir Putin. The Russian President told students at a career guidance forum that the “future belongs to artificial intelligence,” and whoever is first to dominate this category will be the “ruler of the world.” In other words, Russia fully intends to be a front runner in the AI space. It won’t necessarily hog its technology, though.


Putin maintains that he doesn’t want to see anyone “monopolize” the field, and that Russia would share its knowledge with the “entire world” in the same way it shares its nuclear tech. We’d take this claim with a grain of salt (we wouldn’t be surprised if Russia held security-related AI secrets close to the vest), but this does suggest that the country might share some of what it learns.

Not that this is reassuring to long-term AI skeptic Elon Musk. The entrepreneur believes that the national-level competition to lead AI will be the “most likely cause of WW3.” And it won’t even necessarily be the fault of overzealous leaders. Musk speculates that an AI could launch a preemptive strike if it decides that attacking first is the “most probable path to victory.” Hyperbolic? Maybe (you wouldn’t be the first to make that claim). It assumes that countries will put AI in charge of high-level decision making, Skynet-style, and that they might be willing to go to war over algorithms. Still, Putin’s remarks suggest that Musk’s concern has at least some grounding in reality — national pride is clearly at stake.

Source:

https://www.engadget.com/2017/09/04/putin-says-ai-leader-will-rule-the-world

Facebook AI learns human reactions after watching hours of Skype

There’s something not quite right about humanoid robots. They are cute up to a point, but once they become a bit too realistic, they often start to creep us out – a foible called the uncanny valley. Now Facebook wants robots to climb their way out of it.

Researchers at Facebook’s AI lab have developed an expressive bot, an animation controlled by an artificially intelligent algorithm. The algorithm was trained on hundreds of videos of Skype conversations, so that it could learn and then mimic how humans adjust their expressions in response to each other. In tests, it successfully passed as human-like.

To optimize its learning, the algorithm divided the human face into 68 key points that it monitored throughout each Skype conversation. People naturally produce nods, blinks and various mouth movements to show they are engaged with the person they are talking to, and eventually the system learned to do this too.


The bot was then able to look at a video of a human speaking, and choose in real time what the most appropriate facial response would be. If the person was laughing, for example, the bot might choose to open its mouth too, or tilt its head.
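As a toy illustration of the general idea, tracking the speaker’s 68 facial landmarks over a short window and predicting a plausible set of listener landmarks for the next frame, the sketch below trains a simple regressor on placeholder data. It is an assumption-laden stand-in, not Facebook’s actual model; the window size, network shape and random training data are all invented for the example.

```python
# Toy sketch of landmark-to-reaction prediction, not Facebook's actual model.
# Given a short window of the speaker's 68 facial landmarks, predict the
# listener's landmark positions for the next frame. The data here is random
# placeholder noise; a real system would train on tracked video conversations.

import numpy as np
from sklearn.neural_network import MLPRegressor

N_LANDMARKS = 68      # key points per face, as described in the article
WINDOW = 10           # hypothetical number of past frames used as context
FEATS = WINDOW * N_LANDMARKS * 2   # (x, y) per landmark per frame

rng = np.random.default_rng(0)
speaker_windows = rng.normal(size=(500, FEATS))           # placeholder training inputs
listener_next = rng.normal(size=(500, N_LANDMARKS * 2))   # placeholder training targets

# Learn a mapping from the speaker's recent expression to the listener's reaction.
model = MLPRegressor(hidden_layer_sizes=(256,), max_iter=300, random_state=0)
model.fit(speaker_windows, listener_next)

# At run time: feed the live speaker's landmark window, get the bot's next pose.
live_window = rng.normal(size=(1, FEATS))
bot_pose = model.predict(live_window).reshape(N_LANDMARKS, 2)
print(bot_pose.shape)  # (68, 2) -> drives the animated face
```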

The Facebook team then tested the system with panels of people who watched animations that included both the bot reacting to a human, and a human reacting to a human. The volunteers judged the bot and the human to be equally natural and realistic.

However, as the animations were quite basic, it’s not clear whether a humanoid robot powered by this algorithm would have natural-seeming reactions.

Additionally, learning the basic rules of facial communication might not be enough to create truly realistic conversation partners, says Goren Gordon at Tel Aviv University in Israel. “Actual facial expressions are based on what you are thinking and feeling.”

Source:

https://www.newscientist.com/article/2146294-facebook-ai-learns-human-reactions-after-watching-hours-of-skype/

The world’s top artificial intelligence companies are pleading for a ban on killer robots

A revolution in warfare, in which killer robots, or autonomous weapons systems, are common on battlefields, is about to begin.

Both scientists and industry are worried.

The world’s top artificial intelligence (AI) and robotics companies have used a conference in Melbourne to collectively urge the United Nations to ban killer robots or lethal autonomous weapons.

An open letter by 116 founders of robotics and artificial intelligence companies from 26 countries was launched at the world’s biggest artificial intelligence conference, the International Joint Conference on Artificial Intelligence (IJCAI), as the UN delays meeting until later this year to discuss the robot arms race.

Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales, released the letter at the opening of the conference, the world’s pre-eminent gathering of experts in artificial intelligence and robotics.


The letter is the first time that AI and robotics companies have taken a joint stand on the issue. Previously, only a single company, Canada’s Clearpath Robotics, had formally called for a ban on lethal autonomous weapons.

In December 2016, 123 member nations of the UN’s Review Conference of the Convention on Conventional Weapons unanimously agreed to begin formal talks on autonomous weapons. Of these, 19 have already called for a ban.

“Lethal autonomous weapons threaten to become the third revolution in warfare,” the letter says.

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.

“These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

Signatories of the 2017 letter include:

  • Elon Musk, founder of Tesla, SpaceX and OpenAI (US)
  • Mustafa Suleyman, founder and Head of Applied AI at Google’s DeepMind (UK)
  • Esben Østergaard, founder & CTO of Universal Robotics (Denmark)
  • Jerome Monceaux, founder of Aldebaran Robotics, makers of Nao and Pepper robots (France)
  • Jürgen Schmidhuber, leading deep learning expert and founder of Nnaisense (Switzerland)
  • Yoshua Bengio, leading deep learning expert and founder of Element AI (Canada)

Walsh is one of the organisers of the 2017 letter, as well as an earlier letter released in 2015 at the IJCAI conference in Buenos Aires, which warned of the dangers of autonomous weapons.

The 2015 letter was signed by thousands of researchers working in universities and research labs around the world, and was endorsed by British physicist Stephen Hawking, Apple co-founder Steve Wozniak and cognitive scientist Noam Chomsky.

“Nearly every technology can be used for good and bad, and artificial intelligence is no different,” says Walsh.

“It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change and the ongoing global financial crisis. However, the same technology can also be used in autonomous weapons to industrialise war.

“We need to make decisions today choosing which of these futures we want. I strongly support the call by many humanitarian and other organisations for a UN ban on such weapons, similar to bans on chemical and other weapons,” he added.

Ryan Gariepy, founder of Clearpath Robotics, says the number of prominent companies and individuals who have signed this letter reinforces the warning that this is not a hypothetical scenario but a very real and pressing concern.

“We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability,” he says.

“The development of lethal autonomous weapons systems is unwise, unethical and should be banned on an international scale.”

Source:

http://www.businessinsider.com/top-artificial-intelligence-companies-plead-for-a-ban-on-killer-robots-2017-8

How AI, AR, and VR are making travel more convenient

From 50 ways to leave your lover, as the song goes, to 750 types of shampoos, we live in an endless sea of choices. And although I haven’t been in the market for hair products in a while, I understand the appeal of picking a product that’s just right for you, even if the decision-making is often agonizing. This quandary (the “Goldilocks Syndrome” of finding the option that is “just right”) has now made its way to the travel industry, as the race is on to deliver highly personalized and contextual offers for your next flight, hotel room or car rental.

Technology, of course, is both a key driver and enabler of this brave new world of merchandising in the travel business. But this is not your garden variety relational-databases-and-object-oriented-systems tech. What is allowing airlines, hotels and other travel companies to behave more like modern-day retailers is the clever use of self-learning systems, heuristics trained by massive data sets and haptic-enabled video hardware. Machine learning (ML), artificial intelligence (AI), augmented reality (AR) and virtual reality (VR) are starting to dramatically shape the way we will seek and select our travel experiences.

Let every recommendation be right

AI is already starting to change how we search for and book travel. Recent innovation and investment have poured into front-end technologies that leverage machine learning to fine-tune search results based on your explicit and implicit preferences. These range from algorithms that are constantly refining how options are ranked on your favorite travel website, to apps on your mobile phone that consider past trips, expressed sentiment (think thumbs up, likes/dislikes, reviews) and volunteered information like frequent traveler numbers.

Business travel, as well, is positioned for the application of AI techniques, even if not all advances are visible to the naked eye. You can take photos of a stack of receipts on your smartphone; optical character recognition software codifies expense amounts and currencies, while machine learning algorithms pick out nuances like categories and spending patterns.
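As a rough sketch of that pipeline, the snippet below assumes the OCR text has already been extracted; it then pulls out amounts and currencies with a pattern match and assigns a category with naive keyword rules. The pattern, the keyword lists and the sample receipt are invented placeholders, and a production system would rely on trained models, as the article describes.

```python
# Minimal sketch of the receipt pipeline described above: OCR text goes in,
# amounts and currencies come out via pattern matching, and a naive keyword
# rule assigns an expense category. All patterns and categories here are
# illustrative assumptions, not a real expense product's logic.

import re

CURRENCY_PATTERN = re.compile(r"(USD|EUR|GBP|\$|€|£)\s?(\d+(?:[.,]\d{2})?)")

CATEGORY_KEYWORDS = {          # hypothetical keyword rules, not a trained model
    "meals": ["restaurant", "cafe", "coffee"],
    "lodging": ["hotel", "inn"],
    "transport": ["taxi", "uber", "train", "airline"],
}

def parse_receipt(ocr_text: str) -> dict:
    """Extract amounts and guess a spending category from OCR'd receipt text."""
    amounts = [
        (currency, float(value.replace(",", ".")))
        for currency, value in CURRENCY_PATTERN.findall(ocr_text)
    ]
    lowered = ocr_text.lower()
    category = next(
        (cat for cat, words in CATEGORY_KEYWORDS.items() if any(w in lowered for w in words)),
        "uncategorized",
    )
    return {"amounts": amounts, "category": category}

if __name__ == "__main__":
    sample = "Grand Hotel Barcelona\nRoom charge EUR 189.00\nCity tax € 2.50"
    print(parse_receipt(sample))  # amounts plus a guessed "lodging" category
```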

AI is also improving efficiencies in many operational systems that form the backbone of travel. Machine learning is already starting to replace a lot of rule-based probabilistic models in airport systems to optimize flight landing paths to meet noise abatement guidelines, or change gate/ramp sequencing patterns to maximize fuel efficiency.

Making decisions based on reality

VR and AR are still changing and evolving rapidly, and with many consumer technology giants publicly announcing products this year, we can expect to see rapid early adoption and mainstreaming of these technologies. Just as music, photos, videos and messaging became ubiquitous thanks to embedded capabilities in our phones, future AR and VR applications are likely to become commonplace.

VR offers a rich, immersive experience for travel inspiration, and it is easy to imagine destination content being developed for a VR environment. But VR can also be applied to travel search and shopping. My company, Amadeus, recently demonstrated a seamless flight booking experience that includes seat selection and payment. Virtually “walking” onto an airplane and looking at the specific seat you are about to purchase makes it easier for consumers to make informed decisions, while allowing airlines to clearly differentiate their premium offerings.


AR will probably have a more immediate impact than VR, however, in part due to the presence of advanced camera, location and sensor technology already available today on higher-end smartphones. Airports are experimenting with beacon technology where an AR overlay would be able to easily and quickly guide you to your tight connection for an onward flight, or a tailored shopping or dining experience if you have a longer layover.

“Any sufficiently advanced technology is indistinguishable from magic,” goes Arthur C. Clarke’s famously quoted third law. But as we come to expect more authentic experiences, whether precise search results, an informed booking or an immersive travel adventure, we can count on increasingly magical technology from systems that learn to deliver us our “perfect bowl of porridge.”

Source:

https://venturebeat.com/2017/08/03/how-tech-is-making-travels-inconveniences-much-more-convenient/
