Tech site Mashable is being sold to publishing giant Ziff Davis, according to a new report from The Wall Street Journal. The website, which has focused on tech and tech-adjacent stories since it began publishing in 2005, will sell for around $50 million, the report says, far less than the $250 million valuation it earned in a funding round last March.
The sale comes after Mashable spent much of this year trying to secure additional funding, according to the WSJ’s sources. After failing to attract adequate interest in a raise, it began pursuing an outright sale just a few months ago.
The report also claims that while a push towards video at the site initially resulted in a rosier revenue picture, it’s now on track to post a loss for 2017, despite its millions of monthly visitors.
Today Facebook is announcing that users can now order food for takeout or delivery using both the Facebook mobile app and website. But it’s not at all what you might think; Facebook hasn’t created its own answer to Seamless, which would be massive news for the restaurant industry. This isn’t that.
Instead, the company is partnering with existing services GrubHub, Delivery.com, DoorDash, ChowNow, Zuppler, EatStreet, Slice, and Olo, and will now link out to those food ordering businesses for restaurants that support them. You head to the new “Order Food” area of Facebook under the Explore section, find the local spot you’re craving, and then hit “start order.” From there, if a restaurant supports more than one of Facebook’s ordering partners, you’ll be able to choose between them. Once you do, Facebook will bring up an in-app browser that takes you through the existing websites for Delivery.com and the others. That’s where all the ordering actually happens, so you’re not actually doing much with the Facebook app beyond finding a restaurant and tapping your preferred delivery option.
Seamless is not currently among Facebook’s partner services, but parent company GrubHub is, so that should get you most of the same delivery restaurants. But there are other omissions such as Caviar, so you’ll still need to open those apps separately to know which restaurants use them and place an order.
Facebook is also partnering on food ordering directly with national chains Chipotle, Five Guys, Jack in the Box, Papa John’s, Wingstop, Panera, TGI Friday’s, Denny’s, El Pollo Loco, and Jimmy John’s. But it works the same way as with the other services; you browse to one of these nearby chain locations, pick start order, and then you’ll be sent to their existing delivery system. All Facebook is really doing here is launching an in-app browser so you can get a meal without ever leaving the app.
“We’ve been testing this since last year, and after responding to feedback and adding more partners, we’re rolling out everywhere in the US on iOS, Android and desktop,” Alex Himel, Facebook’s VP of local, said in a press release. “People already go to Facebook to browse restaurants and decide where to eat or where to order food, so we’re making that easier.”
CLEVELAND, Ohio — Training and education may not be enough to robot-proof your job, a recent report dealing with the impact of automation and offshoring on job loss shows.
The 10 jobs most vulnerable to automation include mathematical science occupations and insurance underwriters, according to a Ball State University study. A college degree is usually required for these occupations, both of which have median annual salaries of more than $65,000.
Many of the other top 10 jobs most vulnerable to automation pay less and don’t require the same level of education.
However, all on the list have something in common, according to “How Vulnerable are American Communities to Automation, Trade and Urbanization?”
“The study found that low risk of automation is associated with much higher wages, averaging about $80,000 a year,” states a news release on the report. “Occupations with the highest risk of automation have incomes of less than $40,000 annually.”
Only one of the jobs least at risk of automation — occupational therapist — paid about $80,000 a year, according to the updated version of the report, which was released last week; the original was published in June.
Like most of the other robot-proof jobs, occupational therapist is a “high touch” occupation, or one in which direct interaction with clients and/or colleagues is routinely required. Most of the least vulnerable jobs are in health care and related fields.
The study looked at communities throughout the United States that are most at risk of job loss due to automation. No Ohio counties made the top 25 list. Ranking first was the Aleutians East Borough, Alaska, followed by Quitman County, Georgia, and the Aleutians West Census Area, Alaska.
“Automation is likely to replace half of all low-skilled jobs,” said Michael Hicks, director of Ball State’s Center for Business and Economic Research, in the release. “Communities where people have lower levels of educational attainment and lower incomes are the most vulnerable to automation. Considerable labor market turbulence is likely in the coming generation.”
The report also looked at jobs most at risk of being off-shored. Several of them had median annual salaries in the $80,000 range or higher. They included: computer programmers ($79,530), computer and information research scientists ($80,110), actuaries ($97,070), mathematicians ($111,110) and statisticians ($110,620).
One in four of all U.S. jobs will be at risk of being lost to foreign competition in the coming years, the report says.
The report incorporates research on automation and offshoring published in recent years, as well as an analysis of government and other data.
TOP 10 JOBS MOST VULNERABLE TO AUTOMATION
1. Data entry keyers. Annual median wage is $29,460
2. Mathematical science occupations, $66,210
3. Telemarketers, $23,530
4. Insurance underwriters, $65,040
5. Mathematical technicians, $46,600
6. Hand sewers, $23,640
7. Tax preparers, $36,450
8. Photographic process workers and processing machine operators, $26,590
9. Library technicians, $32,310
10. Watch repairers, $34,750
TOP 10 JOBS LEAST VULNERABLE TO AUTOMATION
1. Recreational therapists. Annual median wage is $45,890
2. Emergency management directors, $67,330
3. First-line supervisors of mechanics, installers, and repairers, $63,010
4. Mental health and substance abuse social workers, $42,170
Facebook has its own version of Apple’s Face ID. If you get locked out of your Facebook account, the company is testing a way to regain access by using your face to verify your identity. That could be especially useful if you’re somewhere that you can’t receive two-factor authentication SMS, like on a plane or while traveling abroad, or if you lose access to your email account.
Social media researcher Devesh Logendran (a pseudonym) sent a screenshot of the feature to TNW’s Matt Navarra. We asked Facebook about it and got this confirmation:
“We are testing a new feature for people who want to quickly and easily verify account ownership during the account recovery process. This optional feature is available only on devices you’ve already used to log in. It is another step, alongside two-factor authentication via SMS, that we’re taking to make sure account owners can confirm their identity.”
If the feature proves reliably helpful to users and isn’t fooled by hackers, Facebook could potentially roll it out to more people.
Over the years Facebook has tried a number of novel ways to help you get back into a locked account. In some cases it asks you to identify photos of your friends to prove you’re you. Or it’s tried allowing you to designate several “trusted friends” who receive a code that you can ask them for to unlock your account.
While Facebook has faced some backlash over facial recognition for photo tag suggestions in the past, this feature would only use the technology to privately help you regain access, so it shouldn’t raise the same level of privacy concern, though obviously anything involving biometric data can give people pause. But if it means getting back to your messages and News Feed, or repairing damage done by a hacker, many people are likely to be comfortable giving their face to Facebook.
There’s nothing quite like “borrowing” an idea from someone else in the tech world. It’s all about how you implement the idea, how you make sure the idea is still general enough that it is not outright theft, and then how your user base reacts to the change.
That’s what makes a new feature on the iPhone, called Do Not Disturb While Driving, so interesting. It’s something Android users have enjoyed (or been annoyed by) for a while. On the iPhone, it means your phone is basically locked. When you use the mode and pick up your phone, you’ll see a screen that says your phone is disabled.
When you get a message or receive a phone call, the iPhone can then send a message back that you’re driving. To enable the feature on any iPhone that runs iOS 11, just head to Settings and enable the Do Not Disturb While Driving feature. You can set it to activate automatically when the iPhone senses you are driving or manually when you decide to use it. (A chip inside the phone can sense movement that could only be a car.)
Over the last week, I’ve used the feature many times. Well, to be more specific: I’ve stopped driving to sit idle in a parking lot or at the curb, picked up my phone, and realized that it was impossible for me to check for a text or glance at my iTunes playlist.
You can go through a few settings to disable it of course, but it’s really a reminder to stay safe, remain vigilant, and keep your attention on the road. And here’s the amazing part. It worked. I refrained from glancing at the phone, even though it was safe to do so, and I decided to just wait until I was out of the vehicle entirely.
We know distracted driving is an issue, because accidents and fatalities on the road have risen slightly in the last year or two. It’s a problem because your brain goes into a strange blackout mode where all you see is the screen and nothing else–no pedestrians, no other cars, no roadside objects. It’s a good thing the brain does this, because it allows us to focus. It’s a bad thing when you are driving 70 miles-per-hour in heavy traffic.
For Android users, the feature has been available since last year at least. I recall using it with a Google Pixel phone connected using Android Auto to several makes and models, including a nice sports sedan with a lot of horsepower. The feature also blocks messages and calls. There’s no way to prove Apple noticed this feature and added it, but the Pixel essentially does the same thing–sensing the car is moving and blocking calls.
You can set a custom message on the phone to send back to people letting them know you are driving, and you can select whether all calls are blocked or just those not in your contacts or favorites. Anyone can use a trigger word (“urgent”) to contact you even if you are in Do Not Disturb mode.
Back in April, Lyft launched features that made its system easier to use by deaf drivers and those who are hard of hearing. Now, it’s adding a couple more to celebrate National Deaf Awareness Month. Thanks to its partnership with the National Association of the Deaf, the ride-hailing firm has developed “flash-on request” for drivers.
If they’ve activated the app’s hard-of-hearing accessibility function, they’ll get a powerful visual notification whenever a ride request comes in: their phone’s screen and flashlight will both light up. When combined with the Amp emblem flashing the words “New Ride,” it could lower the chances of a driver missing out on a request.
In addition, Lyft is also making an attempt to breach the language barrier between drivers and passengers. It’s beefing up the automated text it sends out notifying passengers that their drivers are deaf or hard of hearing with a link to a tutorial on how to say “Hello” and “Thank you” in American Sign Language. The company didn’t say when the features will be available exactly, but it promises to roll them out soon.
Forget the arms race or space race — the new battle for technological dominance revolves around AI, according to Vladimir Putin. The Russian President told students at a career guidance forum that the “future belongs to artificial intelligence,” and whoever is first to dominate this category will be the “ruler of the world.” In other words, Russia fully intends to be a front runner in the AI space. It won’t necessarily hog its technology, though.
Putin maintains that he doesn’t want to see anyone “monopolize” the field, and that Russia would share its knowledge with the “entire world” in the same way it shares its nuclear tech. We’d take this claim with a grain of salt (we wouldn’t be surprised if Russia held security-related AI secrets close to the vest), but this does suggest that the country might share some of what it learns.
Not that this reassures long-term AI skeptic Elon Musk. The entrepreneur believes that the national-level competition to lead AI will be the “most likely cause of WW3.” And it won’t even necessarily be the fault of overzealous leaders. Musk speculates that an AI could launch a preemptive strike if it decides that attacking first is the “most probable path to victory.” Hyperbolic? Maybe (you wouldn’t be the first to make that claim). It assumes that countries will put AI in charge of high-level decision making, Skynet-style, and that they might be willing to go to war over algorithms. Still, Putin’s remarks suggest that his concern has at least some grounding in reality — national pride is clearly at stake.
China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo.
Facebook has announced two new updates that will limit video clickbait posts from appearing in the News Feed. The posts being targeted are those that have fake video play buttons embedded into an image, and videos of a static image.
Facebook’s algorithm actively promotes videos, especially longer ones. Spammers have exploited this to trick users into clicking links to low-quality websites and those with malicious ads. Users started noticing static images disguised as videos a little while ago where some pages were gaming Facebook’s algorithm by just uploading static memes as 10-second videos.
“Publishers that rely on these intentionally deceptive practices should expect the distribution of those clickbait stories to markedly decrease,” Facebook engineers Baraa Hamodi, Zahir Bokhari, and Yun Zhang wrote in a blog post. “Most Pages won’t see significant changes to their distribution in News Feed.”
The demotion of video clickbait posts will roll out over the next few weeks. In May, the company rolled out more tweaks to the News Feed to limit clickbait posts.
Facebook has been taking a more aggressive approach to moderating content on its platform since the US election, after the social networking site was criticized for not doing enough to combat fake news proliferating on its platform.
Communication technologies are constantly advancing to keep up with the times. Messaging apps are huge right now, overtaking social media to become the primary way we communicate online.
When most entrepreneurs are starting out, they like to read articles on “how to make a killing with your first app” and “building the multi-billion dollar app,” along with most books related to the topic. They are glued to this side of the story and blinded to the other. To have your own success story, you have to find out why other apps fail. The painful truth is that there are more failed apps than successful ones.
Apple will make $2 million in donations to civil rights groups working to fight white supremacism such as that on display in Charlottesville, and it will match employee donations to similar causes two-for-one. iTunes will also soon offer Apple users an option to contribute to one of Apple’s chosen organizations, the Southern Poverty Law Center. In an apparently related move, Apple Pay has stopped accepting payments on websites selling white supremacist and Nazi gear.
The email in full:
Like so many of you, equality is at the core of my beliefs and values. The events of the past several days have been deeply troubling for me, and I’ve heard from many people at Apple who are saddened, outraged or confused.
What occurred in Charlottesville has no place in our country. Hate is a cancer, and left unchecked it destroys everything in its path. Its scars last generations. History has taught us this time and time again, both in the United States and countries around the world.
We must not witness or permit such hate and bigotry in our country, and we must be unequivocal about it. This is not about the left or the right, conservative or liberal. It is about human decency and morality. I disagree with the president and others who believe that there is a moral equivalence between white supremacists and Nazis, and those who oppose them by standing up for human rights. Equating the two runs counter to our ideals as Americans.
Regardless of your political views, we must all stand together on this one point — that we are all equal. As a company, through our actions, our products and our voice, we will always work to ensure that everyone is treated equally and with respect.
I believe Apple has led by example, and we’re going to keep doing that. We have always welcomed people from every walk of life to our stores around the world and showed them that Apple is inclusive of everyone. We empower people to share their views and express themselves through our products.
In the wake of the tragic and repulsive events in Charlottesville, we are stepping up to help organizations who work to rid our country of hate. Apple will be making contributions of $1 million each to the Southern Poverty Law Center and the Anti-Defamation League. We will also match two-for-one our employees’ donations to these and several other human rights groups, between now and September 30.
In the coming days, iTunes will offer users an easy way to join us in directly supporting the work of the SPLC.
Dr. Martin Luther King said, “Our lives begin to end the day we become silent about the things that matter.” So, we will continue to speak up. These have been dark days, but I remain as optimistic as ever that the future is bright. Apple can and will play an important role in bringing about positive change.
Google’s smart speaker can now pull double duty as a phone for voice calls. The company just confirmed that it’s rolling out Google Home’s calling feature in the US and Canada beginning today. Users can dial anyone in their contacts and local businesses for free — so long as the call recipient is in one of those two countries. The calling feature was first announced back in May.
In turning its speaker into a phone, Google is taking another step to challenge Amazon and its Echo devices, which introduced calling and messaging features earlier this year. But the two companies take a significantly different approach in how the feature actually works and who you’re able to communicate with.
HOW TO CALL SOMEONE WITH YOUR GOOGLE HOME SPEAKER
To place calls with Home, you just say “OK Google, call (recipient).” You can also do “Hey Google” if that’s your preferred phrase for activating the speaker. The person you’re calling needs to be stored in Google Contacts for things to work right, so if you’re using another app or service for contact management, you’ll want to make sure those numbers are also in Google’s cloud. Though it might seem like Home is basically just acting as a speakerphone, that’s not the case. Calls are made over Wi-Fi, so they don’t use your phone plan’s minutes. In fact, Google Home calling is entirely separate from your smartphone. That’s both good and bad at the moment, which I’ll get into next.
WHAT ARE THE DIFFERENCES BETWEEN GOOGLE HOME AND AMAZON ALEXA CALLING?
Google Home lets you call anyone in your contacts; it doesn’t matter if they also own a Google Home speaker or not. You’re calling their actual phone. With Alexa calling, you’re always calling someone else’s Echo device or their Alexa smartphone app. That’s the major difference between the two, and definitely swings in Google’s favor.
There’s no way to call someone else’s Google Home like you can make Echo to Echo calls with Alexa. Google only supports outgoing calls. If you’re a fan of video chat, Amazon wins this one since you can make face-to-face calls with two Echo Shows or an Echo Show and the Alexa app.
If you’re not a Google Voice or Project Fi user, the person you’re calling from Google Home won’t see a recognizable phone number. Instead, they’ll see “unknown” or “no caller ID,” which might make someone hesitant to pick up. Just think of all the mobile spam calls we’re dealing with these days. You might find yourself leaving a lot of voicemails! Users of Google’s phone-related services Voice and Fi can link their number to Home right away to avoid this inconvenience and have that number displayed to recipients. Google has promised to have it working for everyone else by the end of the year. Please hurry, Google.
The only way to use Google Home voice calling is with your Google Home device. Amazon’s Alexa calling and messaging can be done using the Alexa app when away from your speaker, but again, since that’s uniquely between Echo devices, it doesn’t really make sense for Google’s approach.
Unless you’re linking a Google Voice or Project Fi number, you don’t need to configure any settings before placing your first call; Home has access to your Google Contacts and is also smart enough to call the right businesses you request.
Google Home can identify different users in your house by voice, so if you say “OK Google, call dad” it will call your dad without needing to ask which user is making the request. Pretty neat. Though even a single slip-up there could get awkward…
WAIT, WHAT? I DON’T WANT PEOPLE TO THINK I’M A SPAMMER. HOW DO I LINK MY GOOGLE VOICE OR PROJECT FI NUMBER TO GOOGLE HOME?
You can tell Google Home to display the phone number you’ve got tied to either Google Voice or Project Fi by going to the Assistant settings in your Google Home smartphone app for Android or iOS. Once that’s done, recipients will see your number show up instead of the terrible “no caller ID” thing.
911 CALLS ARE NOT SUPPORTED YET
It’s super important to know that you cannot initiate emergency calls to 911 using Google Home at this time. This is probably because calls are actually made over Wi-Fi and not with your mobile device, so 911 might have trouble pinpointing an accurate location for whoever’s calling. Still, this seems like something Google should work to resolve. Being able to yell out for 911’s help if you can’t make it to a phone seems like a pretty critical use case for a device inside your house that can now do voice calling.
The information below is the reason I wrote this book: according to research I’ve seen, drones will be widely commercialized around the year 2025. Now is the time for entrepreneurs to start making money with drones.
Commercial drones and their services are expected to become a multibillion-dollar industry in the next decade, according to a new report from market intelligence firm Tractica. The report says that in 2017, drone revenue should amount to $792 million — mostly from hardware sales. By 2025, Tractica predicts the market will rise to $12.6 billion, with two-thirds of the revenue coming from drone-based services rather than hardware. “A number of major industries are seeing strong value propositions in utilizing drones for commercial use,” says Tractica research analyst Manoj Sahi. He named media, real estate and disaster relief as just a few of the industries that could use drone-enabled services. The report says that advances in technology, economies of scale, cloud-based applications and the drive to disrupt the market will contribute to commercial drone success in the coming years.
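To put Tractica's figures in perspective, here is a back-of-envelope sketch (my own arithmetic, not from the report) of the growth rate those numbers imply and the services share of the 2025 forecast:

```python
# Figures quoted from the Tractica report above; the CAGR is a derived estimate.
revenue_2017 = 0.792   # billions USD, mostly hardware sales
revenue_2025 = 12.6    # billions USD, forecast
years = 2025 - 2017

# Implied compound annual growth rate over the forecast window
cagr = (revenue_2025 / revenue_2017) ** (1 / years) - 1

# Two-thirds of 2025 revenue is projected to come from drone-based services
services_2025 = revenue_2025 * 2 / 3

print(f"Implied CAGR: {cagr:.1%}")                      # roughly 41% per year
print(f"2025 services revenue: ${services_2025:.1f}B")  # about $8.4B
```

In other words, the forecast assumes revenue grows more than 40 percent per year for eight straight years, which is why Tractica frames services, not hardware, as where most of that money ends up.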
We all have only 24 hours in a day and seven days in a week. Many of us at some point become overworked and underpaid. If you still feel you have some work to do to reach your career goals, perhaps it’s time to make some changes to how you are working.
Elon Musk is a brilliant businessman, engineer, and inventor who has taken the world by storm with his forward thinking. While we may not all be on the same mission, one thing that we may share is a need to improve our work-life balance as entrepreneurs.
Here’s how to get more accomplished each day, the easy way.
1. Start your day off right
Sometimes, coffee is the breakfast of champions. Billionaire Elon Musk wakes up and spends about 30 minutes addressing high-priority emails and having coffee. This allows him to start his day off on the right foot, by crossing critical tasks off his list first thing in the morning. If you’re anything like me (with a penchant for saving the most difficult tasks until the last minute), then this might be a good method to help you boost your productivity.
2. Change the way you look at social media
You hear it all the time: People say that social media marketing is the key to a successful business. Remember to keep in mind that the organic reach of social media is low and the algorithms are ever changing. If you want reach, it’s going to cost you — that’s how social media makes its money these days, after all.
For this method to work for you, unique content is the way to go. If you’ve found yourself stuck in a rut of sharing others’ work and not creating any of your own, you’ve got to step it up. Get noticed in the media, contribute articles, and get interviewed on podcasts to get exposure.
Another way to utilize social media is to reach out for collaborations with businesses that have a like-minded following. People will share your work via their social platforms when they feel it offers their readers valuable information or even entertaining content.
3. Change the way you chase opportunities
Recognize that today’s tools and technology have opened the doors of opportunity for entrepreneurs. There’s so much you can do if you have the ideas, the initiative, and the will to keep going. Take control of your career, and remember execution is key!
First, get focused by spending time this week giving your business an honest evaluation, and then get to work. Keep an eye out for new opportunities, continue to meet new people and make new connections, and always have goals in place.
Know what you want to accomplish next, and continue to meet and set new goals. Write them down to make them real. Always try to get more exposure, and don’t limit your thinking.
4. Change the way you take advice
When you run your own business, everyone will have advice for you, from coaches to friends and family members. Sometimes it’s good advice, but sometimes you have to listen to your instincts and do what you think is right. After all, you are the entrepreneur.
You have to figure out what makes sense for your business. Listen to someone else’s ideas and put your own spin on them. Trial and error has always been my best friend. It allows me to try something my way, and if it fails, I understand why and can move forward without wondering, “What if?”
5. Change the way you work
Some days you feel on top of the world and unstoppable. Other days you might be running low on motivation. Strive to stay focused, and continue to chip away at your goals. Knock out the less difficult tasks first to build momentum. Just keep putting one foot in front of the other, and you’ll be happy you didn’t waste your time.
6. Change the way you price your work
How much is your time worth? Just because you are good at something, it doesn’t mean you should do it. Is what you’re doing feeding your soul, making you a better person, or feeding your family?
Remember that your time is valuable, and that you don’t have to take jobs you don’t enjoy. If for the time being you’re stuck doing something you’re good at but don’t love, at least make sure it makes sense financially.
You might be cringing at the thought of seeing ads in Facebook Messenger, but Facebook doesn’t appear to have those reservations. The social network has revealed that it’s expanding its beta test of home screen Messenger ads worldwide in the weeks ahead. It’ll be a slow rollout, but the targeted promos should be widely visible by the end of 2017. At least the company isn’t shy about why it’s pushing forward.
Messenger product lead Stan Chudnovsky tells VentureBeat that it’s a simple matter of income: advertising is “how we’re going to be making money right now.” There are “other business models” under consideration, he says, but they all tie into ads. In short: don’t expect Facebook to have second thoughts as long as it’s making billions of dollars in profit from ads.
Facebook does care about the kinds of ads you see. While it’s fine with ads kicking you to a website, it would prefer that ads lead to chats with businesses. You’re more likely to respond to an ad if it takes you to another conversation inside the chat app, Chudnovsky says. The question is whether or not people will simply roll with the changes or balk at them. It’s entirely likely that people will just shrug and move on, but there is a chance this could steer some users toward ad-free alternatives.
As technology becomes more and more enmeshed within our everyday lives, it’s surely only a matter of time before more of us start wearing it on our bodies. It’s been dismissed as a fad, but according to a 2014 study by Forbes, 71% of 16- to 24-year-olds want wearable tech. After a couple of false starts with products like Google Glass, we could soon be seeing some more promising developments in this field, thanks in part to a recent breakthrough by scientists in South Korea. They have come up with a new way of 3D printing electronic microstructures, which will be useful in the construction of all kinds of components, particularly for wearable tech.
The development of conceptually new technology applications is dependent in part on producing new structures and shapes for highly conductive materials. The smaller the structure, the smaller the electrical components need to be, and this gives designers and inventors more freedom to implement technology in new ways. 3D printing has been used in the past to make tiny structures that can be used for electronic components, but the technology was relatively limited in usage, according to the head of the South Korean research team, Seol Seung-Kwon. He and the rest of his scientists from the Korea Electrotechnology Research Institute were able to 3D print highly conductive carbon nanotubes by developing a new type of printing nozzle.
The statement from the researchers says that, “To achieve high-quality printing with continuous ink flow through a confined nozzle geometry, that is, without agglomeration and nozzle clogging, we (designed) a polyvinylpyrrolidone-wrapped MWNT ink with uniform dispersion and appropriate rheological properties.” What this breakthrough has achieved is to make the advantages of 3D printing technology, such as its broad design scope and fast, cheap prototyping capabilities, available to electrical engineers without the manufacturing limitations that were previously stalling progress. Engineers making use of 3D printing can now have significantly more control over the ink that they are using to produce 3D structures.
Making the tiny components needed for wearable technology is one new application that is particularly desirable. Advanced wearables require a bendable material that is still able to integrate a huge amount of miniature circuit boards and components. The carbon nanotubes that can now be 3D printed would fit this requirement perfectly, due to their high level of conductivity and their ability to be fitted together into a complex, flexible structure.
Amazon today is launching a new perk for Prime members that will give them cash back on purchases – even if they’re not paying for items using an Amazon cashback credit card. Through a new rewards program called Amazon Prime Reload, Prime members can receive 2 percent back on purchases when they first load funds into their Amazon Balance using a debit card attached to their bank’s checking account.
Amazon Prime Reload is meant to encourage more people to sign up for Prime, the $99 per year membership program that includes free, 2-day shipping on millions of products, plus same-day shipping in select markets, along with a host of other features like access to Amazon’s Netflix-like service Prime Video, music streaming via Prime Music, free e-books and magazines through Prime Reading, Audible Channels, unlimited photo backup and storage via Prime Photos, Twitch Prime, early access to deals and much more.
However, Amazon Prime Reload has another advantage for the retailer, as well – it may encourage people to load large lump sums into their Amazon Balance, in order to ensure they never accidentally pay for an item through their debit or credit card directly, therefore missing out on the cash back option.
And with additional funds just sitting around in their Amazon account, that could prompt users to make more impromptu purchases, as they won’t have to do the math as to whether the item is something they can afford. Effectively, it feels the same as having a Gift Card balance ready to be used.
In fact, Amazon Prime Reload is built on top of the Gift Card infrastructure that’s already in place, according to the page detailing how the new service works.
Here, Amazon explains how to get started earning rewards.
First, you’ll need a Prime membership if you haven’t yet signed up. Next, you’ll need to provide both your debit card number and U.S. bank account information (account number and routing number) to Amazon, along with your U.S. driver’s license number. You then continue to reload your Gift Card Balance – aka your Amazon Balance – so you have funds available for use when shopping.
Your 2 percent rewards will be added to your Gift Card Balance every time you reload, Amazon explains, instead of being calculated on a per transaction basis.
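The per-reload reward model described above is easy to sketch. This is an illustrative calculation, not Amazon's actual implementation; the function names and cent-based amounts are assumptions for the example.

```python
# Hypothetical sketch of the per-reload reward model: the 2% bonus is
# computed on each reload amount, not on individual purchase transactions.

def reload_reward(reload_amount_cents: int, rate: float = 0.02) -> int:
    """Return the reward (in cents) credited for a single reload."""
    return int(reload_amount_cents * rate)

balance = 0
for reload in (10_000, 5_000):  # two reloads: $100 and $50
    balance += reload + reload_reward(reload)

print(balance)  # 15300 cents: $150 loaded plus $3 in rewards
```

Because the reward is tied to the reload rather than the purchase, spending directly from a credit or debit card earns nothing, which is exactly the incentive to keep the balance topped up that the article describes.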
Amazon says it asks for both your debit card number and bank information because it will sometimes route orders through your debit card to fulfill your reload requests faster. (It doesn’t say when or why that would be the case, however.)
Reloads will make funds available within 5 minutes, in most cases. However, some reloads may be delayed up to 4 hours if a closer review is necessary, says Amazon.
Imagine being paralyzed and having an implanted microchip that could relay a message from your brain to move your prosthetic arm. Or a diagnostic system that could pick up Alzheimer’s a decade before you develop any symptoms. Or a 3D printing machine that could print a pill with a combination of drugs tailored just for you.
Sound far-fetched? Then meet Dr Daniel Kraft, a Harvard-trained oncologist-cum-entrepreneur-cum-healthcare futurologist. As faculty chair for medicine and founder of Exponential Medicine at the Silicon Valley-based Singularity University, he could hardly be more serious – or ambitious – about the revolutionary impact that technology will have on the future of healthcare.
The internet of things, constant connectivity, ever cheaper hardware, big data, machine learning: Kraft’s list of converging “meta-trends” goes on. “This set of technologies, especially when meshed together, offers a real opportunity to reshape and reinvent healthcare around the planet,” he says.
Kraft’s vision is of a patient-centred, tech-led healthcare system (as opposed to “sickcare”, as he defines the current system) that promises to turn the medical world on its head. But what implications does it hold for the future business of healthcare?
Big pharma is one of the first in line for a shake-up, Kraft warns. Today drug firms’ profits are based on blockbuster drugs for pervasive diseases. But what if medical science reveals (as it is doing) that there are really hundreds of sub-types of diabetes, say, or lung cancer? And what if a patient’s full genome sequence can show the likelihood of a blockbuster treatment not working?
“There’s a spectrum of diseases with different molecular pathways and pharma is going to have to adapt to smaller markets in terms of individual drugs,” Kraft says.
On the flipside, the prospect of people being able to take part in clinical trials on their smartphones promises to drastically speed up the time drugs can get to market. Prescribing an app along with a pill will also become commonplace, he suggests, enabling patients to keep on track with their medicine and adjust their dosage if required. Both potentially promise big returns for the pharmaceutical industry.
Drug distribution is set for a radical overhaul too. Digital device manufacturers are already experimenting with so-called “implantables” that use bioelectric sensors to track patients’ vital signs and release a drug dose as and when required. At the other end of the spectrum, drones are now being used to deliver drugs to remote areas or disaster zones. Matternet, one of 50 or so start-up firms to have spun out of Singularity University, has been doing exactly that in Haiti recently.
Kraft warns that radical change is afoot for healthcare providers as well. Imagine a scenario where patients can compare the results of different hospitals or even individual doctors? Or where patients don’t need to come to a clinic once a month for an electrocardiogram but instead wear a smart Band-Aid “patch” that sends the same information 24/7 to their doctor’s surgery? Patient power, in other words.
According to the Food and Agriculture Organization of the United Nations, the world population will reach 9.1 billion by 2050, and to feed that number of people, global food production will need to grow by 70%. For Africa, which is projected to be home to about 2 billion people by then, farm productivity must accelerate at a faster rate than the global average to avoid continued mass hunger.
The food challenges in Africa are multipronged: The population is growing, but it is threatened by low farm productivity exacerbated by weather changes, shorter fallow periods, and rural-urban migration that deprives farming communities of young people. In Northern Nigeria, herdsmen are moving south looking for pasture as their ancestral lands face severe deforestation. In Somalia, the Shebelle River, which supports many farmers, is drying up, causing additional pains in the war-torn country. The combination of higher food demand, stunted yield potential, and increasingly worse farmland must stimulate a redesigned agro-sector for assured food security. Agriculture accounts for more than 30% of the continent’s GDP and employs more than 60% of its working population.
For decades, African governments have used many policy instruments to improve farm productivity. But most farmers are still only marginally improving yields. Some continue to use traditional processes that depend heavily on historical norms, or use tools like hoes and cutlasses that have not evolved for centuries. In some Igbo communities in Nigeria, where I live, it’s common for farmers to plant according to the phases of the moon and attribute variability in their harvests to gods rather than to their own methods.
Those that do look to leverage new technologies run into financial issues. Foreign-made farm technologies remain unappealing to farmers in Africa because they are cumbersome for those who control, on average, 1.6 hectares of farmland. What’s more, less than 1% of commercial lending goes into agriculture (usually to the few large-scale farmers), so smaller farms cannot acquire such expensive tools.
But this is about to change. African entrepreneurs are now interested in how farmers work and how they can help improve yields. The barrier to entry into farming technology has dropped, as cloud computing, computing systems, connectivity, open-source software, and other digital tools have become increasingly affordable and accessible. Entrepreneurs can now deliver solutions to small-size African farms at cost models that farmers can afford.
For example, aerial images from satellites or drones, weather forecasts, and soil sensors are making it possible to manage crop growth in real time. Automated systems provide early warnings if there are deviations from normal growth or other factors. Zenvus, a Nigerian precision farming startup (which I own), measures and analyzes soil data like temperature, nutrients, and vegetative health to help farmers apply the right fertilizer and optimally irrigate their farms. The process improves farm productivity and reduces input waste by using analytics to facilitate data-driven farming practices for small-scale farmers. UjuziKilimo, a Kenyan startup, uses big data and analytic capabilities to transform farmers into a knowledge-based community, with the goal of improving productivity through precision insights. This helps to adjust irrigation and determine the needs of individual plants. And SunCulture, which sells drip irrigation kits that use solar energy to pump water from any source, has made irrigation affordable.
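The kind of rule-based analytics such startups apply to soil-sensor readings can be sketched in a few lines. The thresholds, field names, and advice strings below are purely illustrative assumptions, not Zenvus's or UjuziKilimo's actual models.

```python
# A minimal, hypothetical sketch of turning one soil-sensor reading
# into simple, data-driven farming advice.

def recommend(reading: dict) -> list:
    """Map a soil-sensor reading to a list of suggested actions."""
    advice = []
    if reading["moisture_pct"] < 20:
        advice.append("irrigate")
    if reading["nitrogen_ppm"] < 15:
        advice.append("apply nitrogen fertilizer")
    if reading["temperature_c"] > 35:
        advice.append("temperature stress: consider mulching")
    return advice or ["no action needed"]

sample = {"moisture_pct": 12, "nitrogen_ppm": 22, "temperature_c": 31}
print(recommend(sample))  # ['irrigate']
```

A real system layers weather forecasts and historical yield data on top of rules like these, but the core value proposition is the same: replace guesswork (or the phases of the moon) with measured inputs.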
Beyond precision farming, financial solutions designed for farmers are blossoming. FarmDrive, a Kenyan enterprise, connects unbanked and underserved smallholder farmers to credit, while helping financial institutions cost-effectively increase their agricultural loan portfolios. Kenyan startup M-Farm and Cameroon’s AgroSpaces provide pricing data to remove price asymmetry between farmers and buyers, making it possible for farmers to earn more.
Ghana-based Farmerline and AgroCenta deploy mobile and web technologies that bring farming advice, weather forecasts, market information, and financial tips to farmers, who are traditionally out of reach, due to barriers in connectivity, literacy, or language. Sokopepe uses SMS and web tools to offer market information and farm record management services to farmers.
Apple is putting an end to the scourge of review prompts that seemed to pop up inside of some apps every few days. In a change to the App Store rules this week, Apple said it will now enforce hard limits on how review prompts show up and how often users have to see them. The changes were first spotted by 9to5Mac.
Under the new rules, developers will no longer be able to display review prompts however and whenever they’d like. Instead, there’ll be two key restrictions that should reduce headaches for everyone: First, apps will be required to use a new Apple-made review prompt, which allows users to leave a rating without exiting an app. That’s a huge convenience that may well get a lot more people to leave ratings. Apple introduced the rating prompt a few months ago, but it’s been optional up until now.
The second restriction is on how often that prompt can show up. An app can only display the prompt three times a year, regardless of how often it’s been updated. And once a user has left a rating, they’ll never see it again. Users also have the option to completely disable app review prompts inside the iOS Settings app, preventing the prompts from annoying them at all.
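The prompt policy described above amounts to a simple rate limit. The sketch below is an illustration of that policy in Python, not Apple's actual implementation; the class and method names are assumptions.

```python
# Illustrative model of the App Store review-prompt rules: at most three
# prompts per 365-day window, and none once the user has rated the app
# or disabled prompts in Settings.

from datetime import datetime, timedelta

class ReviewPromptPolicy:
    def __init__(self):
        self.prompt_times = []       # when prompts were shown
        self.has_rated = False
        self.prompts_disabled = False

    def may_prompt(self, now: datetime) -> bool:
        if self.has_rated or self.prompts_disabled:
            return False
        window_start = now - timedelta(days=365)
        recent = [t for t in self.prompt_times if t > window_start]
        return len(recent) < 3

    def record_prompt(self, now: datetime):
        self.prompt_times.append(now)

policy = ReviewPromptPolicy()
now = datetime(2017, 6, 1)
for _ in range(4):
    if policy.may_prompt(now):
        policy.record_prompt(now)

print(len(policy.prompt_times))  # 3: the fourth attempt is suppressed
```

Note that the yearly cap is independent of app updates, which is the key change from the old behavior where each release effectively reset the nagging.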
This seems like it should be a win-win for users and developers. People have been annoyed by app review prompts for years, and this update seems to remedy the problem. It may even make people more interested in leaving a review, because it can be done without exiting the app and because it means they’ll be done with the prompt for good. If that results in more reviews — and reviews from users who aren’t annoyed about switching apps — that’s a good thing for developers, too.
Part of the reason developers have their apps show review prompts so often is because Apple has always reset an app’s rating after every update, even very minor ones. With the redesigned App Store, developers will have the option to change that, so that their app’s ratings are maintained between updates. That’s likely to become a common choice — for good apps, at least — since users will only be able to get prompted for a rating once.
Facebook will fund the training of 3,000 Michigan workers for jobs in digital marketing over the next two years, the social media giant’s COO Sheryl Sandberg announced Thursday during a visit to Detroit.
Grand Circus, a computer coding training firm that’s part of Dan Gilbert’s family of companies, will offer the 10-week training courses in Detroit and Grand Rapids in partnership with Facebook.
Sandberg told Crain’s that the Menlo Park, Calif.-based company’s funding of the training is designed to help fill a growing shortage of computer coding jobs and develop talent for a future possible expansion into Michigan.
“Auto is a very important industry for us,” Sandberg said in an interview with Crain’s. “This is a growing part of our business and we’re hoping we can expand here because our business will demand it.”
The training courses at Grand Circus’ offices in the David Broderick Tower next to Grand Circus Park will begin in July, said Damien Rocchi, co-founder and CEO of Grand Circus.
“Facebook’s intention is to do this nationally, but this has been launched here (first),” Rocchi told Crain’s. “I think it’s an endorsement for the tech community that we’ve built here and the sort of traction we’ve been getting in Detroit over the last five or six years.”
Grand Circus is about to graduate its 50th class of coders this summer and said it has 650 graduates working in 120 companies across the state.
Ellen Zimmer, 55, went through Grand Circus’ 10-week training last fall for front-end website development and landed a job at Quicken Loans Inc. in February as a software project manager — after spending 10 years out of the workforce.
“It enabled me to form a network so I knew who was hiring, what kind of skills they were looking for,” said Zimmer, who had a previous career in early internet marketing at the former Ameritech Corp. “It brought me up to current.”
During an announcement speech, Sandberg highlighted Zimmer’s story as “an example” of how training experienced workers in new skills can help them land in-demand tech jobs.
“The world changed an awful lot in those 10 years you were out of the workplace,” Sandberg said to Zimmer. “But it didn’t matter because what Ellen needed — she had the core skills — she needed an opportunity to learn and she got that here.”
Sandberg said Facebook will work closely with Grand Circus on training Michigan workers in the areas where Facebook and other companies need help.
“When we can find a great local partner like this that we can partner with to help provide the training people need and we can bring them what we know, it’s just a great opportunity for us to develop people who will go to do great work with Facebook and other local companies,” she said.
Facebook is adding emphasis on getting Grand Circus to train women and racial minorities for jobs in digital and social media marketing, Sandberg said.
“We want to develop diverse talent,” she said. “And we want to make sure that we can get the talent that we need. And some of these people go on to work for other companies — that’s great.”
Facebook operates a small sales office in Birmingham, and Sandberg did not rule out a future expansion of the technical end of the website’s business in Michigan. “We always start with sales offices,” she said.
Gov. Rick Snyder praised Facebook’s job training initiative.
“This commitment Facebook is making to Michigan shows their confidence in the state and its residents,” Snyder said Thursday in a statement. “Convergence between the tech and manufacturing sectors is becoming more prominent throughout Michigan and the world, making this type of partnership between employers and education to grow the professional trades more important than ever before.”
Sandberg visited Grand Circus’ offices Thursday morning and had a private meeting with Gilbert before announcing the job training initiative with Rocchi before a crowd of Grand Circus graduates, many of whom land jobs down Woodward Avenue at Gilbert’s Quicken Loans.
In her one-day visit to Detroit, Sandberg went from Grand Circus to General Motors Co.’s Detroit-Hamtramck plant to get a tour with GM CEO Mary Barra.
Before the tour, Sandberg and Barra talked about the convergence of automobiles and computer technology in a Facebook Live video recorded at the assembly plant Barra once ran as general manager.
“I think the fact that you’re giving them that core skill of coding, which is going to be necessary in every industry, is just so important,” Barra said of Facebook’s job training initiative.
In response to an article by New Scientist predicting that artificial intelligence will be able to beat humans at everything and anything by 2060, Elon Musk replied that he believed the milestone would be much sooner – around 2030 to 2040.
Probably closer to 2030 to 2040 imo. 2060 would be a linear extrapolation, but progress is exponential. https://t.co/e6gyOVcMZG
— Elon Musk (@elonmusk) June 6, 2017
The New Scientist story was based on a survey of more than 350 AI researchers, who collectively put a 50% chance on AI outperforming humans at all tasks within 45 years.
At a high level, the data is not shocking, but more of an interesting tidbit from the future. Dive into the details of when those very same AI experts believe machines will be better at specific tasks than humans and things get a little creepy. Experts believe they will be better at translating languages than humans by 2024 – something that is already being done on-the-fly by Google for webpages and for spoken word via Google Translate.
High school students everywhere will be outclassed by AI that is estimated to outperform them in essay writing by 2026. AI moves in to take over truck driving by 2027, though we believe this will happen much sooner based on the progress Tesla is making with autonomous driving. Tesla has a fully autonomous cross-country trip planned for later this year that, if successful, will pave the way for autonomous vehicle technology to go mainstream.
The estimates get stranger with AI predicted to be able to write a bestselling book better than humans by 2049 and to perform extremely complex, dynamic surgery by 2053. All human jobs are expected to be automated within 120 years which is admittedly quite a bit farther out than 2060 but that is representative of the long tail of increasingly smaller tasks.
Elon is not all rainbows and sunshine with AI which is why he created the non-profit OpenAI organization. He co-founded the organization specifically to map out a path forward for AI research and development, and to ensure that AI is created in an intentional and safe manner.
OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.
While narrow AI takes on the individual tasks or groups of tasks that comprise each automated industry, from trucking to making tacos at your local taqueria, OpenAI is looking beyond that to the first artificial general intelligence. This is an intelligence that will have the ability to adapt dynamically to a situation, learn new tasks, apply itself creatively to new conditions and perform much like a human would. OpenAI believes that a dynamic AGI will far surpass the AI implemented in any specific industry and will be a game-changer, packing the power to change the world in ways we never imagined.
With that goal in mind, OpenAI is pushing the envelope in an attempt to define the cutting edge of AI and to thereby earn the right to define the future of AI for the world. As famed computer scientist Alan Kay once said, “The best way to predict the future is to invent it.”
Elon surely has his finger on the pulse of AI and believes that it is highly likely that it will have a massive impact on humanity. OpenAI carries this belief forward, stating that,
Artificial general intelligence (AGI) will be the most significant technology ever created by humans.
Though Elon is confident AI is moving forward at a far faster pace than scientists believe, and is actively working to shape its future, he still fears the technology.
Any part of a car that talks to the outside world is a potential opportunity for hackers.
That includes the car’s entertainment and navigation systems, preloaded music and mapping apps, tire-pressure sensors, even older entry points like a CD drive. It also includes technologies that are still in the works, like computer vision systems and technology that will allow vehicles to communicate with one another.
It will be five to 10 years — or even more — before a truly driverless car, without a steering wheel, hits the market. In the meantime, digital automobile security experts will have to solve problems that the cybersecurity industry still has not quite figured out.
“There’s still time for manufacturers to start paying attention, but we need the conversation around security to happen now,” said Marc Rogers, the principal security researcher at the cybersecurity firm CloudFlare.
Their primary challenge will be preventing hackers from getting into the heart of the car’s crucial computing network, the CAN (controller area network) bus.
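Part of why the CAN bus is such an attractive target is visible in the frame format itself. The sketch below is a simplified, illustrative model of a classic CAN frame, not any vendor's implementation; note what is absent from it.

```python
# A simplified model of a classic (base-format) CAN frame. It carries an
# 11-bit arbitration ID and up to 8 data bytes, but no field identifying
# or authenticating the sender. Any node that reaches the bus can emit a
# frame with any ID, e.g. a spoofed speed or brake message.

from dataclasses import dataclass

@dataclass
class CanFrame:
    arbitration_id: int   # message priority/type, 11 bits in base format
    data: bytes           # payload, at most 8 bytes in classic CAN

    def __post_init__(self):
        if not 0 <= self.arbitration_id < 2**11:
            raise ValueError("base-format CAN ID must fit in 11 bits")
        if len(self.data) > 8:
            raise ValueError("classic CAN payload is at most 8 bytes")

frame = CanFrame(arbitration_id=0x244, data=bytes([0x00, 0x5A]))
print(hex(frame.arbitration_id), len(frame.data))
```

This lack of built-in sender authentication is why an attacker who compromises any networked component, such as the entertainment system, can potentially impersonate safety-critical ones.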
And the challenge of securing driverless cars only gets messier as automakers figure out how to design an autonomous car that can safely communicate with other vehicles through so-called V2V, or vehicle-to-vehicle, communication.
The National Highway Traffic Safety Administration has proposed that V2V equipment be installed in all cars in the future. But that channel, and all the equipment involved, open millions more access points for would-be attackers.
It’s not just V2V communications that security experts are concerned about. Some engineers have imagined a future of vehicle-to-infrastructure communications that would allow police officers to automatically enforce safe driving speeds in construction zones, near schools or around accidents.
Given the years long lag time from car design to production, security researchers are also concerned about the shelf life of software deeply embedded in a car, which may no longer be supported, or patched, by the time the car makes it out of the lot.
Not to be outdone by Amazon, Walmart is now piloting a grocery pick-up service.
The retail giant has started testing the new service with a self-service kiosk stationed in the parking lot of its Warr Acres, Okla., store. Customers order groceries online and, after entering a special five-digit code, can pick them up at the kiosk, Business Insider reported.
While there is no extra cost to using the service, customers must purchase at least $30 worth of groceries to use the new option. More than 30,000 products are available for purchase, and the kiosk is equipped with freezers and fridges to keep items fresh.
The new option from Walmart comes just after Amazon launched a self-service grocery pick-up service in Seattle, called AmazonFresh Pickup. Like Walmart’s service, customers can order their groceries online then travel to a store for pickup. Amazon’s service differs from Walmart’s in that the groceries will be brought to the car (a license plate scanner identifies the vehicle).
In an effort to bolster its presence online, Walmart also recently launched a home delivery service that uses its in-store employees to deliver items to customers while on their commute home. That service, however, doesn’t include the delivery of perishable groceries.
For years, Google allowed its engineers to spend 20 percent of their time on personal projects they thought would ultimately benefit the company. The tech giant has since scaled back on the policy, replacing it with a more focused approach to innovation, but Google’s famous “20 percent time” gave rise to some of its most successful products, including Gmail and AdSense.
Back in 2010, a Bombay-born engineer named Amit Sood used his “20 percent time” to kickstart the Google Art Project, an effort to digitise the world’s museums, making cultural artefacts accessible in extraordinary detail to millions of internet users. It was a Google-sized ambition that fit the company’s mission to “organise the world’s information and make it universally accessible and useful.”
The project has since grown into the Google Cultural Institute, a non-profit arm of the company, now housed in a grand hôtel particulier in the 9th arrondissement of Paris, that has partnered with over 1,300 museums and foundations to digitise everything from the Dead Sea Scrolls to Marc Chagall’s ceiling at the Opéra Garnier, making them accessible on a platform called Google Arts & Culture.
Now, Google is turning its attention to fashion.
Encouraged by the volume of fashion-related online search queries and the rising popularity of fashion exhibitions, Google’s Cultural Institute has partnered with over 180 cultural institutions — including The Metropolitan Museum of Art’s Costume Institute, the Victoria & Albert Museum and the Kyoto Costume Institute — “to bring 3,000 years of fashion to the Google Arts & Culture platform.”
Called “We Wear Culture,” the initiative, which launches today, is based on the premise that fashion is culture, not just clothes. Led by Kate Lauterbach — a Google program manager who began her career at Condé Nast in New York and later worked for J.Crew’s Madewell — it aims to digitise and display thousands of garments from around the world, stage curated online exhibitions, invite non-profit partners like museums and schools to script and share their own fashion stories, and leverage technologies like Google Street View to offer immersive experiences like virtual walkthroughs of museum collections.
For end users, it’s a cultural rabbit hole and research tool. For partners, it’s a way to reach a much wider audience online, furthering both their educational mandates and marketing objectives. But the benefit to Google is more complex.
After a day’s immersion at Google’s Cultural Institute and associated Lab in Paris, BoF caught up with Lauterbach at the company’s London King’s Cross campus to learn more about the thinking behind the initiative and how digitising the world’s fashion archives unlocks value for the tech giant.
BoF: Tell me about the genesis of the Culture Institute’s fashion project.
KL: Well, starting from art we expanded into culture. We did something around performance art, we did something around natural history; so very different, but the same idea: you take Google technologies, you apply them to this facet of culture and you produce something, you push the bounds, you do something different.
I worked in fashion pre-MBA and I just felt like it was a really interesting subject matter. We were starting to see fashion cropping up in different partners’ collections; it’s a personal passion of mine; and it’s also relevant and interesting and searched for online. It’s a conversation I thought we could bring some value to. We started thinking about it almost two years ago now and began having conversations with places like the V&A and the Costume Institute at the Met.
BoF: The project is named “We Wear Culture.” What does that mean?
KL: We wanted to show that fashion is much deeper than just what you wear; that there’s a story behind it, there’s people behind it, there’s influences that come from art, that come from music, that come from culture more broadly; and, in turn, what we wear influences culture. We really wanted to put fashion on a par with art and artists. You look at their influences, you look at their inspiration, you look at their process, you look at their materials. And we thought that if you can have this kind of singular resource online where all of this was starting to be discussed — and hear it from the authority, I think that’s really critical — it would be valuable.
Uber has fired more than 20 employees in conjunction with an internal investigation into its workplace culture, according to a current Uber employee.
The company disclosed the move at an all-hands meeting at its San Francisco headquarters on Tuesday, said the person, who spoke anonymously because he or she was not authorized to speak publicly about the matter. Uber executives did not name the individuals who were terminated.
Uber is taking steps to correct what many in the ride-hailing company say are deep-seated management and culture issues, which have been brought to light over the last few months. In February, Susan Fowler, a former Uber engineer, said that she was sexually harassed by her supervisor during her time at Uber and that the human resources department ignored the claims. Other employees reported systemic issues within Uber, where a premium was placed on strong performance and growth, often at the expense of other workplace behavior.
Uber has hired former United States Attorney General Eric H. Holder Jr. and his law firm, Covington & Burling, to conduct an independent investigation of those claims and Uber’s overall culture. The findings are not yet out.
Uber’s terminations announced on Tuesday stem from a separate investigation conducted by Perkins Coie, another law firm hired by Uber. Lawyers from Perkins Coie consulted with Uber on the internal investigation, and Uber acted upon that firm’s recommendations.
Mr. Holder’s report has been delivered to Uber’s board, though it is unclear when it will be distributed more widely within the company.
A business school in Paris will soon begin using artificial intelligence and facial analysis to determine whether students are paying attention in class. The software, called Nestor, will be used in two online classes at the ESG business school beginning in September. LCA Learning, the company that created Nestor, presented the technology at an event at the United Nations in New York last week.
The idea, according to LCA founder Marcel Saucet, is to use the data that Nestor collects to improve the performance of both students and professors. The software uses students’ webcams to analyze eye movements and facial expressions and determine whether students are paying attention to a video lecture. It then formulates quizzes based on the content covered during moments of inattentiveness. Professors would also be able to identify moments when students’ attention waned, which could help to improve their teaching, Saucet says.
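The quiz-generation step described above boils down to finding stretches of low attention in a time series. The sketch below is an illustrative reconstruction of that idea, not LCA's actual pipeline; the scores, threshold, and function name are assumptions.

```python
# Given per-second attention scores for a video lecture (1.0 = fully
# attentive), find the (start, end) second ranges where attention stayed
# below a threshold; those are the segments to target with quiz questions.

def inattentive_segments(scores, threshold=0.5):
    segments, start = [], None
    for t, s in enumerate(scores):
        if s < threshold and start is None:
            start = t                      # dip begins
        elif s >= threshold and start is not None:
            segments.append((start, t))    # dip ends
            start = None
    if start is not None:                  # dip runs to end of clip
        segments.append((start, len(scores)))
    return segments

scores = [0.9, 0.8, 0.3, 0.2, 0.4, 0.9, 0.9, 0.1, 0.2, 0.8]
print(inattentive_segments(scores))  # [(2, 5), (7, 9)]
```

In a real deployment the scores themselves would come from a webcam-based gaze and expression model; the hard (and contested) part is that upstream inference, not this bookkeeping.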
At first, the technology will only be used for students who watch lectures remotely, though Saucet hopes to eventually launch an in-class version that would send real-time notifications to students whenever they’re not paying attention. Speaking to journalists during a demonstration at ESG’s Paris campus last month, Saucet said the technology could vastly improve the performance of students who take massive open online courses, or MOOCs.
“The problem with MOOCs is that they don’t work,” Saucet said. “It’s been 10 years that we’ve been trying e-learning, and in the US it’s been 25 years. And it doesn’t work.”
A press release from the UN’s World Council of Peoples, which hosted last week’s event, described the launch of Nestor as the “first AI led class,” though that’s not entirely accurate. The software is not capable of actually teaching a course, and it’s not the first time that schools have experimented with similar technologies. The IE Business School in Madrid recently created a WOW Room (the acronym stands for “Window on the World”), where professors stand before a wall of screens and lecture students who tune in from afar. Like Nestor, the system uses “emotion recognition systems” to measure students’ attention.
Advocates for AI in education say the technology could be used as a digital tutor that would adapt to a student’s individual needs, and help foster more effective studying habits. Such software could also help teachers by providing quantitative feedback on the effectiveness of their teaching, advocates say. Some researchers have even raised the prospect of AI acting as a “lifelong learning companion” that would accompany students for years.
But AI programs rely on massive troves of personal data, and there are concerns over how such data would be treated. A personalized learning program launched in New York by InBloom, a data analytics company, collapsed in 2014 amid growing concerns over how data on students would be used and protected from hackers.
Saucet says Nestor won’t store any of the video footage it captures and that his company has no plans to sell any other data the software collects. (His company sells its software to schools.) The data would also be encrypted and anonymized, he says. In addition to facial recognition and analysis, the software can integrate with students’ calendars to suggest possible study times, and track their online behavior to pick up on patterns. If a student typically spends their weeknights watching YouTube videos, for example, Nestor could suggest that they instead spend that time studying. Saucet acknowledges, however, that it will ultimately be up to each school to decide how to treat and store such data.
Cadillac is currently developing a vehicle-to-infrastructure (V2I) system, so its vehicles will be able to receive messages from local infrastructure. Right now, it’s limited to two traffic lights outside GM’s Warren Technical Center in Michigan. The work is being done in collaboration with the Michigan Department of Transportation and the Macomb County Department of Roads.
In short, the traffic lights can tell when a vehicle might have an issue with a stoplight based on its current speed. If you’ve ever had a yellow light pop up that forced you to either slam on the brakes or jam on the gas, that’s what this is trying to prevent. A warning will let the driver know ahead of time to either begin slowing down or speed up a bit, which could very well prevent an accident before it happens.
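The decision the car has to make reduces to comparing its arrival time at the intersection with the signal's remaining phase time. A minimal sketch of that logic follows; it is hypothetical, as GM has not published its actual V2I algorithm, and the broadcast message contents are assumed for illustration.

```python
# Hypothetical sketch of the signal-timing advisory described above.
# Assumes the light broadcasts how many seconds of green remain; GM's
# actual V2I logic is not public.

def advise(distance_m, speed_mps, green_remaining_s):
    """Return a driver advisory given distance to the stoplight (meters),
    current speed (m/s), and seconds of green remaining."""
    if speed_mps <= 0:
        return "stopped"
    arrival_s = distance_m / speed_mps
    if arrival_s <= green_remaining_s:
        return "proceed"          # will clear the intersection on green
    return "prepare to stop"      # light will change before arrival
```

A production system would also account for yellow-phase duration, braking distance, and sensor uncertainty; the point here is only the shape of the calculation.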
Of course, sending and receiving data like this could be a privacy concern, but Cadillac says it won't be a problem. The data being sent doesn't identify the vehicle in any way, whether by its VIN or its registration number. Cadillac also claims the wireless signals can't be tampered with, thanks to the encryption it uses.
Cadillac already has its V2V system installed in the 2017 CTS sedan. Using GPS and dedicated short-range communications, vehicles can send and receive messages from other cars up to 1,000 feet away. It can let you know when highway traffic comes to a stop, or if a nearby car ends up in a collision.
Other automakers are also dabbling in V2I technology. Audi has its Traffic Light Information system, which can tell drivers when a light is about to turn green, so the driver can pay full attention when it does. It's only in use in Nevada for now, but it's likely to roll out to other markets as transportation authorities embrace this kind of fledgling tech.
Mobile users of online discussion forum Reddit will be able to let people know where they’re located.
Reddit, the self-labeled “front page of the internet,” has partnered with location check-in app Foursquare to use its data to power a new Reddit feature debuting today that lets users add their location to any post.
The new feature helps Reddit users add “content and interest” to their posts beyond the usual discussion of politics and pop culture by its 250 million users, Mike Harkey, Foursquare’s vice president of business development, said in a blog post announcing the partnership. He gave the example of users tagging their locations when posting food photos or discussing trips to their local parks. “Think of location in Reddit as an extra emphasis — at-the-ready like the perfect punctuation, or headline,” Harkey wrote.
For Foursquare, the deal marks the latest step in the company’s evolution from a social media app to its more recent incarnation as a “location intelligence” company. Foursquare has previously licensed its database of more than 90 million mapped locations—public places like stores, restaurants, or museums—to companies such as Uber and Airbnb, and the company also began offering up its mobile notification system to developers earlier this year.
Foursquare also pointed out that the location-tagging feature is optional, which is not surprising considering that many Reddit users prefer to remain anonymous on the site. Mobile users who enable location services will simply see a drop-down menu with options for tagging a location.
The move marks Reddit’s latest attempt to increase engagement as the site moves closer to the model of a mainstream social network. In March, Reddit started rolling out public profiles for its users and, last year, the company finally released mobile versions of the site with iOS and Android apps. Adding location-tagging is another way for Reddit to appear more like other social media sites, with some people noting recently that Reddit is beginning to look more and more like Facebook and Twitter.
Reddit has seen quite a few changes since co-founders Alexis Ohanian and Steve Huffman returned to the 12-year-old company in 2015, after interim CEO Ellen Pao stepped down in the wake of a user revolt over her firing of a popular Reddit employee. In addition to the site’s new features, the company’s makeover has seen crackdowns on online harassment and spam, with Ohanian shutting down two popular forums frequented by the “alt-right,” a group often associated with white nationalists and other racist groups.
Just a few days ago, Microsoft announced an upgrade to its Surface Pro touchscreen computing device (Redmond hates when you call it a "tablet"). You might have missed it because not a whole heck of a lot has changed. The Surface Pro now has the latest 7th-gen Intel processors, extended battery life up to 13.5 hours, and a swanky new keyboard covered in Alcantara, a material that feels a bit like suede but is made from polyester and polyurethane for durability. What has significantly changed, however, is the Microsoft Surface Pen, and that could be a big step forward for Microsoft in its current efforts to court the creative class.
The Surface Pen no longer comes bundled with the Surface Pro, but it has gotten performance bumps in almost every column of its spec sheet.
One of the most notable upgrades is latency, now down to 21 milliseconds, roughly half that of the previous model. According to professional illustrator Clint Baker, responsiveness is key to capturing the nuance of an artist's style. "I want drawing on a digital format to feel just like drawing on paper or canvas," he told me via email.
Baker says he does about 70 percent of his illustration work digitally using a stylus. He's currently using a Wacom Cintiq, one of the standard setups in the illustrating profession. The large, pro-grade Cintiq displays keep the response rate down around 12 milliseconds, but Microsoft's Surface Pen is actually faster than Wacom's more portable Cintiq displays, which are more comparable in terms of size and price and have a response rate around 25 milliseconds. Apple, not surprisingly, doesn't disclose the latency of its Pencil.
Pressure sensitivity is another area where the Surface Pen has jumped in performance. It now recognizes 4,096 different levels of pressure, up from 1,024 in the previous model. Baker says this is another important feature. “So much personality comes out in line quality—and that has to do a lot with pressure.” Even the high-end Wacom Cintiq drawing displays only claim 2,048 levels of pressure sensitivity.
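The practical effect of quadrupling pressure levels is finer steps when raw pressure is mapped to stroke width. The mapping below is a toy illustration, not any vendor's actual curve (real drivers typically apply a nonlinear pressure curve).

```python
# Why more pressure levels matter: a stylus reports pressure as an integer
# in [0, levels - 1], and more levels mean finer steps in stroke width.
# This linear mapping is illustrative only.

def stroke_width(raw, levels, min_w=0.5, max_w=8.0):
    """Map a raw pressure reading to a stroke width in pixels."""
    t = raw / (levels - 1)           # normalize to [0, 1]
    return min_w + t * (max_w - min_w)

def width_step(levels, min_w=0.5, max_w=8.0):
    """Smallest possible change in stroke width between adjacent levels."""
    return (max_w - min_w) / (levels - 1)
```

At 4,096 levels the minimum width step is a quarter of what it is at 1,024 levels, which is what lets line weight track subtle pressure changes.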
The Surface Pen now recognizes the angle of the stylus for controlling line shape and shading, a feature popularized by the Apple Pencil. Microsoft also claims to have reduced parallax, meaning the line you draw will appear closer to the tip of the pen as you move along. The glass of the screen can sometimes make you feel as though you're separated from the drawing, which can be distracting.
Microsoft has also announced some updates to its pen-based software, like a new Whiteboard app, which acts as a space for collaborative drawing and note-taking. The virtual "pencil case" can also now carry your brushes and pen settings between apps.
Messaging app Kik Interactive is the latest and potentially most well-established company to delve into a quirky new form of fundraising — creating its own digital currency.
Kik, based in Waterloo, Canada, unveiled plans for an "initial coin offering," a process by which it sells tokens that can be used to buy services on its platform. The idea is that as more and more people use Kik, those tokens, called "Kin," will rise in value.
Interest in coin offerings is high, thanks to surging prices of bitcoin and other virtual currencies. Called ICOs, they give a wide range of people the chance to invest in a company or any other endeavor early on. While unregulated, they have proved popular, with investors spending around $330 million on tokens over the past year, according to data compiled by cryptocurrency blog The Control. Earlier this month, cloud-storage startup Storj raised almost $30 million in five days via an ICO.
Kik, which has raised about $120 million (in real money) from investors including Tencent Holdings Ltd., could serve to add a new layer of legitimacy to the process.
“Kik will be the largest install base of cryptocurrency users in the world,” Chief Executive Officer Ted Livingston said. “Kin, on day one will be the most-used cryptocurrency in the world.”
The move comes as Kik finally reveals how many people actually use its app regularly each month: 15 million. That's a far cry from the 300 million total registered users figure it was sharing around this time last year.
Kik has traditionally been most popular among teens because, unlike with Facebook Inc.'s Messenger or WhatsApp, users don't need a phone number to sign up. Growth has been tough in the past few years, though, as teenagers get smartphones earlier and Kik users switch to Facebook apps once they leave high school.
Kik plans to gift a certain amount of Kin to each user. They'll be able to use the new currency to buy games, live video streams and other digital products. The company's goal is to attract new merchants to sell on the platform, creating a snowball effect where Kin becomes more valuable and more sellers pile onto Kik, increasing its popularity.
“We will create an economy where millions and millions of mainstream consumers are earning in a cryptocurrency for the first time ever,’’ Livingston said. “They’re going to want to spend in that same cryptocurrency as well.’’
When Google said that not sharing photographs of your friends made you “kind of a terrible person” at this year’s I/O keynote, I bristled. The idea that its new Google Photos app would automatically suggest I share pictures with specific people sounded dystopian, especially because so much of the keynote seemed geared toward getting Google’s AI systems to help maintain relationships. Want to answer an email without even thinking about it? Inbox’s suggested responses are rolling out all over Gmail. Has a special moment with somebody slipped your mind? Google might organize photos from it into a book and suggest you have it printed.
Google is far from the first company to do this; Facebook suggests pictures to share and reminds you of friends’ birthdays all the time, for example. It’s easy to describe these features as creepy false intimacy, or say that they’re making us socially lazy, relieving us of the burden of paying attention to people. But the more I’ve thought about it, the more I’ve decided that I’m all right with an AI helping manage my connections with other people — because otherwise, a lot of those connections wouldn’t exist at all.
I don’t know if I’m a terrible person per se, but I may be the world’s worst relative. I have an extended network of aunts, uncles, cousins, and family friends that I would probably like but don’t know very well, and almost never see face-to-face. They’re the kind of relationships that some people I know maintain with family newsletters, emailed photos, and holiday cards. But I have never figured out how to handle any of these things.
Desperate to overcome Japan’s growing shortage of labor, mid-sized companies are planning to buy robots and other equipment to automate a wide range of tasks, including manufacturing, earthmoving and hotel room service.
According to a Bank of Japan survey, companies with share capital of 100 million yen to 1 billion yen plan to boost investment in the fiscal year that started in April by 17.5 percent, the highest level on record.
It is unclear how much of that is being spent on automation, but companies selling such equipment say their order books are growing, and the Japanese government says it sees a larger proportion of investment being dedicated to increasing efficiency. Revenue at many of Japan's robot makers also rose in the January-March period for the first time in several quarters.
“The share of capital expenditure devoted to becoming more efficient is increasing because of the shortage of workers,” said Seiichiro Inoue, a director in the industrial policy bureau of the Ministry of Economy, Trade and Industry, or METI.
If the investment ambitions are fulfilled it would show there is a silver lining as Japan tries to cope with a shrinking and rapidly aging population. It could help equipment-makers, lift the country’s low productivity and boost economic growth.
The government predicts investment in labor-saving equipment will rise this fiscal year, Inoue said.
The way Japan copes with an aging population will provide critical lessons for other aging societies, including China and South Korea, that will have to grapple with similar challenges in coming years.
“More than 90 percent of Japan’s companies are small- and medium-sized, but most of these companies are not using robots,” said Yasuhiko Hashimoto, who works in Kawasaki Heavy Industries Ltd’s (7012.T) robot division. “We’re coming up with a lot of applications and product packages to target these companies.”
Among those products is a two-armed, 170-centimeter (5-foot-7) tall robot. Kawasaki says it is selling well because it can be adapted to a range of industrial uses by electronics makers, food processors and drug companies.
Hitachi Construction Machinery (6305.T) says it is getting a lot of enquiries for its computer-programmed digging machines that use a global positioning system to hew ditches that are accurate to within centimeters and can cut digging time by about half.
“We focus on rentals and expect business to pick up in the second half of the fiscal year, which is when most companies tend to order construction equipment for projects,” said Yoshi Furuno, a company official. Hitachi Construction declined to provide figures.
Mid-sized companies are planning on increasing spending much more than large-caps, which are projecting just a 0.6 percent increase in the fiscal year, according to the Bank of Japan. Smaller companies tend to have less flexibility in overcoming labor shortages by paying workers more or by moving production overseas.
WORKING POPULATION PLUNGING
Some companies could end up spending less than originally planned. But with demographics only worsening, companies will need to continue to search for solutions to the labor shortage problem. Japan’s working-age population peaked in 1995 at 87 million and has been falling ever since. The government expects it to fall to 76 million this year and to 45 million by 2065.
In the fiscal year that ended March 31, 2016, mid-sized companies with 100 to 499 workers advertised to fill 1.1 million new positions, the highest in five years and almost five times the number of open positions at companies with 500 workers or more, Labor Ministry data show.
Among the robot makers to report stronger revenue in the last quarter was Fanuc Corp (6954.T). Its revenue was 7.9 percent higher than a year earlier, the first increase in seven quarters.
According to a report by the Outdoor Foundation, Americans log 598 million nights a year under the stars. At an average of $40 in expenses and fees per night, that’s $24 billion spent on campsites alone. Add in all the related costs—gear, transportation, food—and the Outdoor Industry Association figures the industry generates closer to $167 billion annually.
But former investment banker Michael D’Agostino, who grew up camping on a farm in Litchfield, Conn., still calls the industry a broken business.
The tipping point came a few summers ago, when D’Agostino found himself on vacation “directly across from a campsite of 40 people at a Wiccan convention: robes and UFO spotters and streaking and all.” It wasn’t what he’d imagined as a quiet weekend with his wife—counting stars, listening to crickets, bellies full from prime steaks grilled over a man-made fire. “We definitely took them up on some mead,” he said of the Wiccans, “but we had to keep the dog in the tent—she was going bonkers—and it was kind of like camping in Times Square.”
The experience led him to create Tentrr, a free iPhone app that takes the guesswork out of camping. It lets users find and instantly book fully private campsites in vetted, bucolic settings, all within a few hours’ drive of major cities. The sites themselves are all custom-designed by D’Agostino and follow a standardized footprint: They consist of hand-sewn canvas expedition tents from Colorado, set on an elevated deck with Adirondack chairs. You’re also guaranteed to find Brazilian wood picnic tables and sun showers strewn around the campsites, as well as portable camping toilets, fire pits, cookware, and grills. As for the sleeping arrangements? Air mattresses with featherbed toppers, not sleeping bags, are the name of the game.
Tentrr beta-launched last summer with just 50 campsites in New York state, while D’Agostino figured out how to get liability insurers on board with his slice of the sharing economy. Despite the soft opening, the app has already logged $4 million in funding and 1,500 bookings—40 percent of them by people who’d never gone camping before.
In the days leading up to Memorial Day, Tentrr will move past its beta phase with a newly expanded collection of roughly 150 campsites spread across the U.S. Northeast. By July 4 an additional 100 sites will gradually come online, not including a 50-site expansion into the Pacific Northwest. Next year, D’Agostino plans to tackle the “San Francisco-Yosemite corridor, the American Southwest, and counterclockwise around the perimeter of the U.S., all within a few hours of major metropolitan cities, until all of the country’s top-50 hubs are served.” His ultimate vision, however, is global.
Google on Wednesday revealed several new updates for its most popular hardware and services as part of its annual I/O conference. While the developer-centric event has historically focused on Google products like Android and Chrome, this year’s announcements revolved mainly around the search giant’s advancements in artificial intelligence, or AI. That’s been a common theme among Silicon Valley’s top companies lately, setting up AI as the next big tech battleground.
The smart speaker battle is heating up: Just days after Amazon revealed a new Echo device with a screen, Google announced a slew of new capabilities for its own connected speaker, the Home.
The most significant upgrade is that Home users will be able to make hands-free phone calls through the device. Calls to the U.S. and Canada will be free, and Home owners can choose to link their phone number to the gadget. (Amazon recently announced a similar feature, but calling is limited to Echo-to-Echo communication for now).
Because Google Home can tell the difference between various users’ voices, it will know to call the right person depending on who’s placing the call. During a live demo, Google’s Rishi Chandra asked to call his mom, then said that if his wife had uttered the same phrase, the Home would have known to call Chandra’s mother-in-law instead.
Google is also launching a new Home feature called "proactive assistance," which is basically a different term for notifications (another feature that arrived on the Echo this month). When the lights on the Home come on, users will be able to ask if it has any important updates to share, such as a change to an upcoming calendar appointment or a flight delay.
The Google Assistant can “see”
The Google Assistant digital aide is getting a big visual upgrade. In the coming months, users will be able to point their phone at a sign in a different language and watch as it’s translated before their very eyes. Or, if they aim their phone at a theater, it could show upcoming showtimes and an option to buy tickets. That’s all thanks to Google’s Lens app, which is similar to the Bixby Vision feature Samsung offers on its Galaxy S8 and S8+ smartphones.
Furthermore, the Google Assistant is coming to Apple iPhones as a standalone app. It won’t be baked in at the operating system level like Siri is, so it will be limited in how useful it is for iPhone owners. But it can still do things like the Lens features above.
Android O updates
Google offered new information about what to expect from its next major Android update, which for now is referred to as “Android O.”
One highlight: When downloading an app for the first time, Android may ask if you’d like it to fill in your username if you’ve already used that service in Google Chrome.
Google is also making it easier to copy-and-paste text in Android. If you tap an address, for example, it will automatically select the entire address instead of just a portion of it, and from there it will suggest pasting it into Google Maps.
Other core Android O updates will improve security and battery life and add a picture-in-picture mode, which will let users minimize a video so that it only occupies a portion of the screen.
New Android software for low-end phones
Google is working on a version of Android called Android Go that's optimized to work on low-end phones with under 1GB of memory (most high-end phones have around 4GB). Go is also built to help users budget their bandwidth: when using the Android Go version of YouTube, for instance, users will be able to preview videos and see exactly how much data a full clip will eat up before deciding to stream it.
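Showing the data cost of a clip before streaming comes down to simple arithmetic on duration and bitrate. The sketch below is illustrative only; the numbers and function name are assumptions, not YouTube Go's actual implementation.

```python
# Sketch of the "preview the data cost" idea: estimate download size from
# stream bitrate and clip duration before the user commits. Illustrative
# only; not YouTube Go's actual code.

def estimated_megabytes(duration_s, bitrate_kbps):
    """Approximate download size in megabytes for a stream of the given
    duration at the given bitrate."""
    bits = duration_s * bitrate_kbps * 1000
    return bits / 8 / 1_000_000

# e.g. a 3-minute clip at 500 kbps:
# estimated_megabytes(180, 500) -> 11.25 (MB)
```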
Android Go is similar in spirit to Google’s Android One program, which offers low-cost Android devices to users in developing markets.
Virtual reality without a phone
Google is one of several tech companies pursuing the “holy grail” of virtual reality: Headsets that don’t need to be connected to a computer or smartphone to work. To that end, the search giant announced that standalone VR headsets will be available starting later this year.
HTC — maker of the Steam-compatible Vive headset — and PC maker Lenovo are among the first partners working on these headsets. The search giant collaborated with chipmaker Qualcomm to come up with a reference design.
Google Photos makes real-life albums now
Move over, Shutterfly. Google announced a new service that creates photo books based on the images in your phone’s gallery. If you’re using the Google Photos app, you’ll be able to search for images of a specific person. From there, Google Photos can choose the best photos and arrange them in an album that you can order.
Google also announced other sharing-centric features for Google Photos. You can, for instance, choose to share your entire photo library with your spouse or a family member. If you don’t want them seeing your entire collection, you can limit the sharing to only include photos of specific people, like your kids.
For the first time, a power utility has teamed up with Tesla to use its battery packs for extra grid power during peak usage times. Vermont’s Green Mountain Power (GMP) is not only installing Tesla’s industrial Powerpacks on utility land, it’s also subsidizing home Powerwall 2s for up to 2,000 customers. Rather than firing up polluting diesel generators, the utility can use them to provide electricity around the state. At night, when power usage is low, they’re charged back up again.
Green Mountain Power said the idea started after a power outage knocked out power to over 15,000 homes. "Three customers who had Powerwalls never lost power, so it carried their home through," GMP CEO Mary Powell told WCAX-TV. "And unlike a generator, they didn't have to worry about hooking it up, they didn't have to worry about whether it was fueled."
Once in operation, the Powerwalls will stay charged in your home. During times of peak electricity usage when its normal power sources (hydro, nuclear, wind, etc.) max out, GMP will draw from its Powerpacks and the consumer-installed Powerwalls. At night, when demand is low, the batteries are recharged.
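The charge-at-night, discharge-at-peak cycle described above is a standard peak-shaving dispatch. Here is a toy hourly model of it; the values and logic are illustrative, as GMP's actual dispatch rules are not public.

```python
# Toy model of the peak-shaving cycle described above: discharge storage
# when demand exceeds generation capacity, recharge when there's headroom.
# Illustrative only; not GMP's actual dispatch logic.

def dispatch(demand_mw, capacity_mw, battery_mwh, battery_max_mwh):
    """One hourly step. Returns (new battery level in MWh,
    total generation required in MW)."""
    if demand_mw > capacity_mw:
        shortfall = demand_mw - capacity_mw
        discharge = min(shortfall, battery_mwh)   # cover the peak from storage
        return battery_mwh - discharge, demand_mw - discharge
    headroom = capacity_mw - demand_mw
    charge = min(headroom, battery_max_mwh - battery_mwh)  # recharge off-peak
    return battery_mwh + charge, demand_mw + charge
```

The economic case Powell describes follows directly: every megawatt the battery covers at the peak is a megawatt that doesn't have to come from an expensive peaking generator.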
Tesla says GMP is the first utility to do such a large-scale “grid-smoothing” installation. “There hasn’t been any really successful large-scale trial, so that’s why this is so exciting,” said Tesla CTO J.B. Straubel. “It’s been in development at Tesla for quite some time, but this is our first real deployment.”
GMP is offering 7kW Powerwalls for $15 a month or a flat fee of $1,500. That’s quite a bargain compared to the regular $3,000 price, but again, it’s only available for 2,000 homes. That’s presumably enough, however, to provide peak power backup in conjunction with the company’s industrial Powerpacks.
GMP thinks the Tesla batteries are not only less polluting than regular generators, but more economical too. “[Backup generators] are some of the dirtiest and … costliest forms of generation,” says Powell. “So when we can produce 10 megawatts of energy, that is an alternative to that peaking generation, that has tremendous economic value.”
Most of us talk to our computers on a semi-regular basis, but that doesn’t mean the conversation is any good. We ask Siri what the weather is like, or tell Alexa to put some music on, but we don’t expect sparkling repartee — voice interfaces right now are as sterile as the visual interface they’re supposed to replace. Facebook, though, is determined to change this: today it unveiled a new research tool that the company hopes will spur progress in the march to create truly conversational AI.
The tool is called ParlAI (pronounced like Captain Jack Sparrow asking to parley) and is described by the social media network as a “one-stop shop for dialog research.” It gives AI programmers a simple framework for training and testing chatbots, complete with access to datasets of sample dialogue, and a “seamless” pipeline to Amazon’s Mechanical Turk service. This latter is a crucial feature, as it means programmers can easily hire humans to interact with, test, and correct their chatbots.
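The core thing such a framework standardizes is the observe/act loop between an agent and its conversation partner, whether that partner is a dataset, another bot, or a Mechanical Turk worker. The sketch below shows the general shape of that loop; the class and method names are illustrative, not ParlAI's actual API.

```python
# Generic shape of the dialogue loop a framework like ParlAI standardizes.
# Names here are illustrative only, not ParlAI's real API.

class EchoAgent:
    """A trivial 'bot' that replies with the last thing it was told."""
    def __init__(self):
        self.last = None

    def observe(self, message):
        self.last = message["text"]

    def act(self):
        return {"text": self.last or "hello"}

def run_episode(agent, turns):
    """Feed a list of user turns to the agent and collect its replies."""
    replies = []
    for text in turns:
        agent.observe({"text": text})
        replies.append(agent.act()["text"])
    return replies
```

Because every agent speaks the same message protocol, the same loop can drive training on a dataset one day and a paid human evaluation the next, which is the "one-stop shop" claim in practice.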
Abigail See, a computer science PhD student at Stanford University, welcomed the news, saying frameworks like this were "very valuable" to scientists. "There's a huge volume of AI research being produced right now, with new techniques, datasets and results announced every month," said See in an email to The Verge. "Platforms [like ParlAI] offer a unified framework for researchers to easily develop, compare and replicate their experiments."
In a group interview, Antoine Bordes from Facebook's AI research lab FAIR said that ParlAI was designed to create a missing link in the world of chatbots. "Right now there are two types of dialogue systems," explains Bordes. The first, he says, comprises bots that "actually serve some purpose" and execute an action for the user (e.g., Siri and Alexa); the second serves no purpose but is actually entertaining to talk to (like Microsoft's Tay — although, yes, that one didn't turn out great).
“What we’re after with ParlAI, is more about having a machine where you can have multi-turn dialogue; where you can build up a dialogue and exchange ideas,” says Bordes. “ParlAI is trying to develop the capacity for chatbots to enter long-term conversation.” This, he says, will require memory on the bot’s part, as well as a good deal of external knowledge (provided via access to datasets like Wikipedia), and perhaps even an idea of how the user is feeling. “In that respect, the field is very preliminary and there is still a lot of work to do,” says Bordes.
It’s important to note that ParlAI isn’t a tool for just anyone. Unlike, say, Microsoft’s chatbot frameworks, this is a piece of kit that’s aimed at the cutting-edge AI research community, rather than developers trying to create a simple chatbot for their website. It’s not so much about building actual bots, but finding the best ways to train them in the first place. There’s no doubt, though, that this work will eventually filter through to Facebook’s own products (like its part-human-powered virtual assistant M) and to its chatbot platform for Messenger.
JESSE ENGEL IS playing an instrument that's somewhere between a clavichord and a Hammond organ—18th-century classical crossed with 20th-century rhythm and blues. Then he drags a marker across his laptop screen. Suddenly, the instrument is somewhere else between a clavichord and a Hammond. Before, it was, say, 15 percent clavichord. Now it's closer to 75 percent. Then he drags the marker back and forth as quickly as he can, careening through all the sounds between these two very different instruments.
“This is not like playing the two at the same time,” says one of Engel’s colleagues, Cinjon Resnick, from across the room. And that’s worth saying. The machine and its software aren’t layering the sounds of a clavichord atop those of a Hammond. They’re producing entirely new sounds using the mathematical characteristics of the notes that emerge from the two. And they can do this with about a thousand different instruments—from violins to balafons—creating countless new sounds from those we already have, thanks to artificial intelligence.
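What the marker-dragging amounts to is interpolation in the model's learned representation of each instrument, rather than mixing their audio. A minimal sketch of that idea follows; the embeddings here are toy stand-ins, and NSynth's real model (a WaveNet-style autoencoder) decodes such interpolated embeddings back into audio.

```python
# The blending Engel demonstrates: interpolate between two instruments in
# the model's embedding space instead of layering their sounds. Toy values
# for illustration; NSynth's real embeddings are learned from audio.

def interpolate(emb_a, emb_b, t):
    """Linear interpolation between two instrument embeddings, t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(emb_a, emb_b)]

clav = [1.0, 0.0, 0.2]      # toy clavichord embedding
hammond = [0.0, 1.0, 0.8]   # toy Hammond embedding

mostly_clav = interpolate(clav, hammond, 0.15)     # "15 percent" Hammond
mostly_hammond = interpolate(clav, hammond, 0.75)  # "75 percent" Hammond
```

Decoding a point between two embeddings yields a genuinely new timbre, which is why the result is not the same as playing both instruments at once.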
Engel and Resnick are part of Google Magenta—a small team of AI researchers inside the internet giant building computer systems that can make their own art—and this is their latest project. It’s called NSynth, and the team will publicly demonstrate the technology later this week at Moogfest, the annual art, music, and technology festival, held this year in Durham, North Carolina.
The idea is that NSynth, which Google first discussed in a blog post last month, will provide musicians with an entirely new range of tools for making music. Critic Marc Weidenbaum points out that the approach isn’t very far removed from what orchestral conductors have done for ages—“the blending of instruments is nothing new,” he says—but he also believes that Google’s technology could push this age-old practice into new places. “Artistically, it could yield some cool stuff, and because it’s Google, people will follow their lead,” he says.
The Boundaries of Sound
Magenta is part of Google Brain, the company’s central AI lab, where a small army of researchers are exploring the limits of neural networks and other forms of machine learning. Neural networks are complex mathematical systems that can learn tasks by analyzing large amounts of data, and in recent years they’ve proven to be an enormously effective way of recognizing objects and faces in photos, identifying commands spoken into smartphones, and translating from one language to another, among other tasks. Now the Magenta team is turning this idea on its head, using neural networks as a way of teaching machines to make new kinds of music and other art.
Apple is planning to upgrade all three of its MacBook lines at WWDC this year, according to a report from Mark Gurman at Bloomberg.
The company is said to be working on three updated models: a MacBook Pro with Intel’s latest Kaby Lake processor, a more powerful version of the 12-inch MacBook, and an updated 13-inch MacBook Air, which could get a faster processor as well. But sadly, there’s no word on a better screen. Apple really wants you to think of the 13-inch MacBook Pro as the Air’s successor.
While none of the updates sound like particularly major changes from a hardware perspective, it’s encouraging to see that Apple is taking at least some of the criticism of its latest MacBook Pros to heart and updating the laptop line with Intel’s newest processors. It’s unclear whether other concerns like RAM flexibility and uneven USB-C performance in some models will be addressed. You’ll definitely still need dongles.
The MacBook and MacBook Air are certainly due for an update, having been last refreshed in 2016 and 2015, respectively. WWDC 2017 is scheduled to take place from June 5th to June 9th.
The Washington Post is launching an augmented-reality series today, the start of a push into AR-enhanced storytelling this year.
The first series uses AR to let people explore innovative buildings around the world, starting with the Elbphilharmonie concert hall in Hamburg, Germany, whose structure lets visitors hear and see the same thing no matter where they sit. Readers can access the story on the Post’s app on iOS devices, then point their smartphone’s camera at the ceiling of any room they’re in and tap play. The real ceiling is transformed into the concert hall ceiling while an audio narration by Post art and architecture critic Philip Kennicott plays. Users can also tap a prompt to read an accompanying article by Kennicott.
With AR’s obvious application to visual stories, Kennicott said there’s a question of whether AR will replace the need for critics like him. To him, the answer is that AR can enhance, rather than replace the experience, and hence make criticism more interesting and relevant to readers. “It’s a great way to get people a lot more than what they’re getting from a photographer or video,” Kennicott said.
The series will continue with at least two more installments through the end of the summer. The Post hopes to do around six AR series total this year and plans to expand the AR stories to Android and its Rainbow app.
The Post deliberately started small, with the first video in the series only running about 10 seconds, said Joey Marburger, the Post’s head of product. “With that quick experience, you get more out of the story,” he said. “But we didn’t want it to be the only way you can experience the story. We didn’t want to overdo it.”
Audi is sponsoring the series. Its first ad will appear as a visual, and future ads will take the form of AR branded stories in upcoming installments.
AR is still a new experience for most people and requires prompts to get people to try it. It also doesn’t make sense for every story. But the Post made it a priority this year because, unlike virtual reality, it’s less expensive, doesn’t require a headset, and advertiser demand is there, Marburger said. The series took six people in editorial and engineering to produce, which is comparable to the size of the teams the Post puts on other projects.
There’s no doubt about it — the Apple Watch is a hit. While Apple has not disclosed sales numbers, smart money has the device, now in its second generation, at over 25 million units sold. That not only means the watch is a scorcher that is now beating the initial trajectory of the iPhone, but recent estimates also crown it the world’s top-selling fitness device, outpacing dedicated fitness trackers from the likes of Fitbit in terms of market share. The Apple Watch has generated more revenue since its debut than the entire Swiss watch industry during that period of time, which is an incredible achievement.
The fitness aspect of the watch has always been a huge focus, and we have been told by a source familiar with Apple’s plans that the company is looking to introduce a game-changing feature in an upcoming new version of the Apple Watch.
While there are countless uses for this new category of device that places a smartphone on your wrist, one of the most popular is fitness monitoring and tracking, an intense area of focus for Apple. There is most likely not a single consumer fitness product in the world that has had more internal testing, validation and investment than the Apple Watch, and this doesn’t seem to be slowing. Our source indicates that Apple has hired 200 PhDs in the past year as part of the company’s laser lock on improving and innovating in the health space with Apple Watch.
It has been rumored that Apple is interested in glucose monitoring, and it appears that the time may now be right. Previous rumors have stated that Apple might only be able to achieve this through a separate device that complements the watch; however, BGR has learned that this might not be accurate.
According to our source, Apple’s sights are now set on the epidemic of diabetes, and the company plans to introduce a game-changing glucose monitoring feature in an upcoming Apple Watch. An estimated 30 million people suffer from diabetes in the US alone, according to the American Diabetes Association, so Apple’s efforts could lead to a historic achievement in the world of health and fitness.
Currently, the only way to properly measure blood sugar levels is by using a blood sample, or by using a device that penetrates the skin. It’s uncomfortable, difficult and painful, and there are not presently any widely available noninvasive methods that are accurate. Apple isn’t stopping at just glucose monitoring, however.
Apple also plans to introduce interchangeable “smart watch bands” that add various functionality to the Apple Watch without added complexity, and without increasing the price of the watch itself. This could also mean that the glucose monitoring feature will be implemented as part of a smart band, rather than being built into the watch hardware.
A camera band that adds a camera to the watch is another possibility, or a band containing a battery to extend battery life for wearers who want even more longevity, even though the Apple Watch’s battery performance is already class-leading. One can imagine the other types of smart bands that might be possible with this approach. This strategy might also make it easier for Apple to work with the FDA on approving a medical device that the company could pre-announce, as opposed to letting a new Apple Watch leak months or even years in advance if it were to be submitted to the regulatory agency.
Another interesting quote from our source is that Apple has “identified the right part of the body and there’s so much more they can and intend to do with the watch.” While glucose monitoring would be a huge first step in Apple’s goal of continuing to make the Apple Watch indispensable, it’s not hard to imagine a near future where the watch is the hub of our digital and physical lives. It would monitor multiple aspects of the wearer’s health, but also replace smartphones when combined with some sort of augmented reality glasses or contact lenses, alongside AirPods in our ears.
Facebook’s Snapchat-style augmented reality face filters are coming to Instagram. Eight different filters will be available starting today, including a few different crowns, ones that make a person look like a koala or a rabbit, and another that sends math equations spinning around your head.
Instagram’s face filters will work whether you’re using the front or the back camera on your phone. You can find them by opening up the camera interface in the app and tapping the new icon in the bottom right corner. The filters can be used in any of Instagram’s shooting modes — photo, video, or even Boomerang. You can access them by downloading the new 10.21 update on the App Store or Google Play Store.
The idea of using augmented reality technology to map and apply animations to a user’s face was popularized by Snapchat, which bought Looksery — a company that pioneered the tech — back in 2015. Facebook responded by snatching up Belarusian startup MSQRD in early 2016, and the tech made its way into Facebook Stories earlier this year.
This is far from the first idea Facebook has lifted from Snap — adding Snapchat’s 24-hour Stories feature to Instagram is the real molten core of this entire drama — but augmented reality face filters were one of the last blockbuster Snapchat features that Instagram was missing. They are also just one small part of the much larger vision Facebook has for augmented reality, which the company laid out in detail at last month’s F8 conference. (Snap, of course, shares a similar vision.)
Instagram is also adding a few other features to the app today. Users will now be able to add hashtag “stickers” to a photo or video when posting it to their Story. Viewers will be able to tap these stickers to explore other media that’s been shared with the same hashtag, the same way you can already tag other users or apply geostickers. A new “rewind” video feature (also “inspired” by Snapchat) and an eraser have been added to the app as well.
Facebook is banning misleading uses of its Live video format. The company tells TechCrunch that it’s adding a section to its Live API Facebook Platform Policy that reads “Don’t use the API to publish only images (ex: don’t publish static, animated, or looping images), or to live-stream polls associated with unmoving or ambient broadcasts.”
Videos that violate the policy will have reduced visibility on Facebook, and publishers that repeatedly break the rule may have their access to Facebook Live restricted.
Facebook asked for viewer feedback and heard that users don’t find these static images or graphics-only polls to be interesting or engaging Live content. In December, Facebook quietly barred from the News Feed graphics-only Live videos that used Likes or Reactions to get people to vote.
Now Facebook is taking the next step toward preserving the sanctity of the Live format.
It’s the urgency, unpredictability and on-screen action that draws people to Live videos and gets them to keep watching to see what happens next. If users grow accustomed to fake Live videos, they may watch all Live videos less, and be less inclined to open notifications about people or publishers they follow starting to broadcast.
We’ve reached out for clarification about one prevalent type of misleading Live video: countdowns. Since these are often filmed with a computer graphic over a looping background, videos like the New Year’s countdown above from BuzzFeed could potentially qualify, but Facebook tells me that, for now, countdowns of real-world happenings that don’t loop are not prohibited. But if publishers who post these keep getting negative feedback, their reach could shrink, and Facebook says it will continue to monitor the trend.
Facebook has poured a ton of engineering and marketing resources into owning the verb, “to go Live.” Keeping the quality of these broadcasts high is critical to it recouping those costs over the long term by being the premier place to record and watch Live social content.
Facebook will bury links to low-quality websites and refuse to carry ads pointing to them in a News Feed algorithm change announced today. Facebook defines a “low-quality site” as one “containing little substantive content, and that is covered in disruptive, shocking or malicious ads.” This includes hosting pop-up and interstitial ads, adult ads or eye-catching but disgusting ads for products that fight fat or foot fungus.
The change could help Facebook fight fake news, as fakers are often financially motivated and blanket their false information articles in ads.
High-quality sites may see a slight boost in referral traffic, while crummy sites will see a decline as the update rolls out gradually over the coming months. Facebook tells me that the change will see it refuse an immaterial number of ad impressions that earned it negligible amounts of money, so it shouldn’t have a significant impact on Facebook’s revenue.
Facebook product manager for News Feed Greg Marra tells me Facebook made the decision based on surveys of users about what disturbed their News Feed experience. One pain point they commonly cited was links that push them to “misleading, sensational, spammy, or otherwise low-quality experiences . . . [including] sexual content, shocking content, and other things that are going to be really disruptive.”
Today’s change is important because if users don’t trust the content on the other side of the links and ads they see in News Feed, they’ll click them less. That could reduce Facebook’s advertising revenue and the power it derives from controlling referral traffic. Getting sent to a low-quality, shocking site from News Feed could also frustrate users and cause them to end their Facebook browsing session, depriving the social network of further ad views, engagement and content sharing.
Facebook previously tried to reduce the prevalence of links to low-quality sites with a 2014 News Feed update that suppressed sites that people came back to News Feed immediately after viewing.
To implement the update, Marra tells me Facebook “reviewed hundreds of thousands of webpages, identifying which ones have low-quality content.” It used this data to train an AI system to constantly scan new links shared in News Feed, looking for ones that match the low-quality site training data set. It then demotes these sites and blocks them from buying Facebook ads.
The parameters Facebook used to classify sites as low-quality include:
A disproportionate volume of ads relative to content. This counts actual advertisements, not legally required elements such as cookie notices or logins gating private content such as paywalls.
Pages that contain malicious or deceptive ads which include Prohibited Content as defined in our policies.
Use of pop-up ads or interstitial ads, which disrupt the user experience.
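To make the criteria above concrete, here is a toy sketch (emphatically not Facebook’s actual classifier, which is a trained AI system) of how the three listed signals might combine into a simple rule-based score; the threshold and weights are invented for illustration:

```python
# Toy low-quality-page scorer based on the three signals Facebook listed.
# All thresholds and weights here are invented illustrations.
def low_quality_score(ad_count, content_words, has_prohibited_ads,
                      has_popup_or_interstitial):
    score = 0.0
    # 1. Disproportionate ad volume relative to content (invented threshold).
    if content_words > 0 and ad_count / content_words > 0.01:
        score += 1.0
    # 2. Malicious or deceptive ads (Prohibited Content).
    if has_prohibited_ads:
        score += 1.0
    # 3. Pop-up or interstitial ads that disrupt the user experience.
    if has_popup_or_interstitial:
        score += 1.0
    return score

# A page with 15 ads on a 300-word article, interstitials, no prohibited ads:
print(low_quality_score(15, 300, False, True))  # 2.0
```

In the real system, the equivalent of this score comes from a model trained on the hundreds of thousands of hand-reviewed pages mentioned above rather than hand-set rules.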
One of the most prominent hosts of these types of ads is Forbes, which shows an annoying full-screen interstitial ad before you can read its articles. When specifically asked if Forbes would come under the gun, Marra diplomatically admitted, “Interstitial popover ads are one of the things people are telling us are disruptive.”
If Facebook can keep people confident that the links they click lead to quality content, it could continue to be the homepage of the internet.
Looking back at Steve Jobs’ tenure at Apple, it’s impossible to separate the role Microsoft and Bill Gates played. The companies helped pioneer the industry and define an era. The two CEOs partnered at various times, competed all the time, and challenged one another in ways that helped shape the landscape of technology. It’s a complex relationship – which you can witness in this amusing video compilation of Steve Jobs’ best quotes about Microsoft.
During the development of the Macintosh in the early ’80s, Microsoft was an important ally. Apple needed groundbreaking software for its upcoming platform, and Microsoft was one of the few companies developing for it. It was a crucial phase for Apple.
The strength of their relationship could be witnessed at an internal Apple event in Hawaii where Steve Jobs introduced the Macintosh to a few Apple VIPs. Bill Gates lavished praise on the Mac, and Steve Jobs loved every moment of it.
Steve Jobs and Bill Gates were so close at the time that according to a Guardian article, they even double-dated occasionally.
But all good things must end.
Steve Jobs had this dream where Apple would dominate the computer business and Microsoft would own the application side of that business. The OS would naturally also be controlled by Apple.
But Bill Gates wasn’t blind. He understood that the graphical user interface was the future of computing. He also knew that it would quickly make Microsoft’s DOS operating system irrelevant and threatened to reduce Microsoft to (just) a software company dependent on Apple. Bill Gates had bigger plans.
For years, Microsoft had engineers secretly copying the Macintosh OS and working on its own version of a graphical OS: Windows. Not long after the internal event in Hawaii, Steve Jobs learned the crushing news. Microsoft wanted to compete with Apple; Bill Gates had deceived him.
For the next 15 years, Apple would engage in a strange relationship with Microsoft. On one end, Microsoft was prying market share away from Apple; on the other, it was one of Apple’s biggest partners. Steve Jobs would soon leave Apple and create NeXT but would not succeed in making a dent in Microsoft’s dominance.
Along the way, Jobs often sparred with Microsoft, criticizing the company’s lack of creativity.
“The only problem with Microsoft is they just have no taste,” Jobs said in the 1996 public television documentary “Triumph of the Nerds.” “They have absolutely no taste. And I don’t mean that in a small way, I mean that in a big way, in the sense that they don’t think of original ideas, and they don’t bring much culture into their products.”
In a New York Times article that ran after the documentary aired, Jobs disclosed that he called Gates afterward to apologize. But only to a degree.
“I told him I believed every word of what I’d said but that I never should have said it in public,” Jobs told the Times. “I wish him the best, I really do. I just think he and Microsoft are a bit narrow. He’d be a broader guy if he had dropped acid once or gone off to an ashram when he was younger.”
But if Steve was still bitter about Bill, why would he keep a letter from Bill next to his bed during his last moments?
Tough to say…
What both men really thought of each other, or what really happened behind the curtain, will probably never be known. You have to hope that these titans truly shared mutual respect and eventually found grounds to appreciate each other. Bill Gates seems to have:
Bill Gates’ statement on the passing of Steve Jobs
I’m truly saddened to learn of Steve Jobs’ death. Melinda and I extend our sincere condolences to his family and friends, and to everyone Steve has touched through his work.
Steve and I first met nearly 30 years ago, and have been colleagues, competitors and friends over the course of more than half our lives.
The world rarely sees someone who has had the profound impact Steve has had, the effects of which will be felt for many generations to come.
For those of us lucky enough to get to work with him, it’s been an insanely great honor. I will miss Steve immensely.
Microsoft Build 2017 kicks off Wednesday morning in Seattle, a homecoming for the tech giant after years of holding its annual developers conference in San Francisco.
That’s an apt rally-the-troops move considering the escalating battle between some of tech’s biggest companies in the arenas of artificial intelligence, home-assistant hardware and augmented reality.
Some 5,500 developers have heeded the Redmond, Wash., company’s call, helping Build sell out in a day. As Facebook, Apple and Google do with their big developer confabs, Microsoft will use the event to evangelize about its strategy while urging software pros to spend time developing much-needed apps for Microsoft’s ecosystem.
Chat-bots were the big story out of Build 2016 — CEO Satya Nadella pronounced the artificial-intelligence helpers “the new apps” — but famously fizzled out of the gate when, days before the conference, hackers turned Microsoft’s Tay bot into an epithet-spewing racist.
That said, expect bots to be back.
“Last year, Microsoft got ahead of Google, Facebook, Apple and even Amazon on the notions of bots and AI,” or artificial intelligence, says Patrick Moorhead, president of Moor Insights & Strategy. “Bots have ended up so far to be a non-event, but AI is on fire. Microsoft needs to provide updates and enhancements on both.”
Moorhead also anticipates updates on the next generation of Windows 10, which is due out in September, as well as details on Microsoft’s cash machine, its cloud computing platform Azure.
“Azure is the number two public cloud platform behind Amazon (and its Amazon Web Services), and I’d like to see Microsoft give clarity into its hybrid-cloud solution, Azure Stack,” he says, referring to a platform that helps businesses combine on-premises computing power with cloud computing.
Bank on Build 2017 being used to tout Microsoft’s efforts to bite off a piece of Amazon’s booming Echo market. The Alexa-powered home assistant speaker, which just got video capability, is being matched by a new offering from Samsung-owned Harman Kardon, which just unveiled its Invoke speaker, powered by Microsoft’s digital assistant Cortana.
After three years under Nadella, Microsoft’s stock price (MSFT) is at an all-time high anchored largely to consistent gains from Azure. But the most recent quarter revealed weaknesses, particularly in Surface hybrid tablets, which experienced a 26% sales drop.
Expect Nadella to kick things off with a keynote that continues to stress his mantra of “empowering everyone on the planet” through Microsoft’s suite of cloud-based productivity tools, ranging from Office to the recently purchased LinkedIn.
Build 2017 will undoubtedly also showcase some kind of gee-whiz demo related to the company’s groundbreaking mixed-reality headset, HoloLens, an untethered device that at present remains in the hands of developers only.
While a $300 mixed-reality headset from partners Acer and HP was just announced, don’t expect that experience to echo that of the $3,000 HoloLens, which Microsoft is betting on as the next generation of computing device, one that rids us all of our desktops and laptops as our keyboards and monitors suddenly hover in front of us.
Build 2017 takes place at the Seattle Convention Center Wednesday through May 12, and some of its big sessions will be live streamed.
Pinterest today is adding a new feature to its Lens — its live camera search — that will help pick apart what’s in the image and make it easier to search specific parts of that image.
As you can see in the run-through above, what Pinterest calls Visual Guides, based on object detection, is another way that the company is trying to figure out what you are actually searching for when you point your camera at an object in the real world. Pinterest, at the end of the day, is trying to help you find a product and show you a bunch of other things related to it — but that starts with figuring out which object in the image you actually want to explore further.
Pinterest introduced Lens earlier this year as an attempt to continue expanding its tools for users to look at a product and immediately point them to an array of new ideas or products that might be closely or tangentially related. Much of Pinterest’s pitch revolves around helping its users discover new topics or products, like recipes or articles of clothing, that can then drive them to use Pinterest more and more. With more than 175 million users, Pinterest has worked to create a platform that has a discrete use case from Twitter, Facebook and Snap.
And that’s also the big pitch it gives for its marketing partners. Pinterest hopes to keep users’ attentions at all points of their buying experience. It starts with getting someone on the service and helping them discover a new topic, and then digging deeper into that topic. Pinterest keeps tabs on all that activity and helps its partners track those users throughout the buying cycle, eventually trying to point them to an end-product they may want to purchase. So instead of buying ads based on the hope for a conversion (in search, for example), brands and marketers can buy a full stack of ads that constitutes an entire potential customer’s lifetime.
Achieving that is a stiff technical problem, as Pinterest looks more and more to remove any friction that keeps users from going deeper into its service. Instead of having users come to the site directly, Pinterest is trying to integrate the experience more deeply with the real world, starting with the launch of Lens. Pinterest is also adding a way to jump into Lens directly from the home screen with a force-touch tap on the iPhone, much as other apps are trying to reduce the friction of getting into the core experience right away.
That’s going to require a lot of experimentation and tuning, as Pinterest is basically trying to create a new kind of user behavior, and figuring out how to get a user’s full attention when they take a photo of a whole kitchen is one way to start. In addition to all this, and as something of a sign of the company’s focus on these experiences, Pinterest’s head of discovery engineering, Vanja Josifovski, is being bumped up to the CTO role.
Ride-sharing service Uber today raised its Pennsylvania rates 5 cents per mile, a move it said would help drivers afford new protections.
Gus Fuldner, Uber’s head of safety & insurance, said a new partnership with OneBeacon and Aon gives drivers in Pennsylvania and seven other states the option of signing up for a plan that covers medical expenses resulting from a work-related accident with no deductible or co-pay and provides disability income replacement and survivor benefits.
The other states are Arizona, Delaware, Illinois, Massachusetts, Oklahoma, South Carolina and West Virginia.
Fuldner said the coverage applies the entire time a driver is logged onto the Uber app, but the premium of 3.75 cents a mile is charged only on miles traveled while on-trip and earning money carrying passengers. The maximum payout from a single accident is $1 million.
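As a rough illustration of the arithmetic above, here is a short sketch; the 5-cent fare increase and 3.75-cent-per-mile premium come from the report, while the trip distances are made-up examples:

```python
# Illustrative math for Uber's optional driver injury-protection plan.
# Rates are from the report; the mileage figures below are hypothetical.
FARE_INCREASE_PER_MILE = 0.05   # riders pay 5 cents more per mile
PREMIUM_PER_MILE = 0.0375       # drivers pay 3.75 cents per on-trip mile

def premium_for_miles(on_trip_miles):
    """Premium charged to an enrolled driver for a given number of
    on-trip (passenger-carrying) miles; off-trip miles are free."""
    return round(on_trip_miles * PREMIUM_PER_MILE, 4)

# A driver covering 100 on-trip miles in a day would pay:
print(premium_for_miles(100))  # 3.75 (dollars)
```

Note the asymmetry the article describes: the fare increase applies across the board, while the premium is charged only on miles driven with a passenger.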
“This product is completely optional,” he said. “But in states where driver injury protection is available, we will raise fares across the board to help remove any financial barriers that may prevent drivers from choosing this option.”
Uber offers trip cost estimates but does not list its rate structure. However, the unaffiliated site uber-fare-estimator.com puts current Lancaster rates at $1.05 a mile for uberX and $1.80 a mile for uberXL. Base fares, booking fees and minimum charges also apply.
Facebook’s billion-plus users speak a plethora of languages, and right now, the social network supports translation of over 45 different tongues. That means that if you’re an English speaker confronted with German, or a French speaker seeing Spanish, you’ll see a link that says “See Translation.”
But Tuesday, Facebook announced that its machine learning experts have created a neural network that translates language up to nine times faster and more accurately than other current systems that use a standard method to translate text.
The scientists who developed the new system work at the social network’s FAIR group, which stands for Facebook A.I. Research.
“Neural networks are modeled after the human brain,” says Michael Auli, of FAIR, and a researcher behind the new system. One of the problems that a neural network can help solve is translating a sentence from one language to another, like French into English. This network could also be used to do tasks like summarize text, according to a blog item posted on Facebook about the research.
But there are multiple types of neural networks. The standard approach so far has been to use recurrent neural networks to translate text, which look at one word at a time and then predict what the output word in the new language should be. It learns the sentence as it reads it. But the Facebook researchers tapped a different technique, called a convolutional neural network, or CNN, which looks at words in groups instead of one at a time.
“It doesn’t go left to right,” Auli says, of their translator. “[It can] look at the data all at the same time.” For example, a convolutional neural network translator can look at the first five words of a sentence, while at the same time considering the second through sixth words, meaning the system works in parallel with itself.
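The structural difference Auli describes can be sketched in a few lines of NumPy; the sizes and random weights here are toy illustrations, not Facebook’s actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, emb = 6, 4                      # 6 words, 4-dim embeddings (toy sizes)
x = rng.standard_normal((seq_len, emb))  # stand-in word embeddings

# Recurrent style: a sequential loop -- step t depends on step t-1,
# so the words must be consumed strictly left to right.
W_h = rng.standard_normal((emb, emb))
W_x = rng.standard_normal((emb, emb))
h = np.zeros(emb)
states = []
for t in range(seq_len):
    h = np.tanh(x[t] @ W_x + h @ W_h)
    states.append(h)

# Convolutional style: every window of 5 consecutive words is processed
# independently, so all windows can be computed at once (one matmul here).
k = 5
W_conv = rng.standard_normal((k * emb, emb))
windows = np.stack([x[i:i + k].ravel() for i in range(seq_len - k + 1)])
conv_out = np.tanh(windows @ W_conv)

print(conv_out.shape)  # (2, 4): words 1-5 and words 2-6, computed together
```

The parallelism is why the convolutional approach can be so much faster on modern hardware: the windowed matmul has no step-to-step dependency, while the recurrent loop cannot start step t until step t-1 has finished.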
Graham Neubig, an assistant professor at Carnegie Mellon University’s Language Technologies Institute, researches natural language processing and machine translation. He says that this isn’t the first time this kind of neural network has been used to translate text, but that this seems to be the best he’s ever seen it executed with a convolutional neural network.
“What this Facebook paper has basically showed—it’s revisiting convolutional neural networks, but this time they’ve actually made it really work very well,” he says.
Facebook isn’t yet saying how it plans to integrate the new technology into its consumer-facing products; that’s more the purview of a department there called the applied machine learning group. But in the meantime, the researchers have released the technology as open source, so other coders can benefit from it.
That’s a point that pleases Neubig. “If it’s fast and accurate,” he says, “it’ll be a great additional contribution to the field.”
Welcome to a brand new week – check out these must-read tech stories:
1. A Melbourne fintech is celebrating a $17 million funding round. Foreign exchange startup Airwallex got the attention of some serious players in global commerce for its latest capital injection, including card giant Mastercard, famous VC brand Sequoia China and Chinese web conglomerate Tencent. Read more here.
2. Facebook’s Australian revenue instantly multiplied 10 times after the laws came in last year to stop offshoring of local sales. The social media giant went from $33.6 million in Australian revenue in 2015 to $326.9 million for last year, after the Multinational Anti-Avoidance Law came into effect on January 1, 2016 and a company restructure. However, Facebook Australia saw its “costs of revenue” soar from $0 to $275.2 million, which meant an almost complete offsetting of the spectacular rise in local revenue. Read more on how the company made just $3 million in net profit after paying $3.27 million in tax.
3. Google Australia merely doubled its revenue in 2016 as a result of the Multinational Anti-Avoidance Law, but will be fighting an amended tax assessment that the Australian Taxation Office issued it after year end. The internet giant’s Australian arm racked up $1.14 billion in revenue and $104.7 million net profit, which it attributed partially to the restructure and partially to actual growth in operations. Read more on its results.
4. A debt collection startup has scored $1 million in seed funding from Westpac’s Reinventure. InDebted, which is already in operation in Australia, will use the cash to expand overseas to use technology in an industry that’s been slow to move out of pen and paper processes. Read more on the other angel investors.
5. IBM shipped malware on USB sticks sent out to customers. The tech giant has advised that USB drives sent with Storwize flash and hybrid corporate data storage systems should be destroyed, after it was discovered some software on the drives contained malware supposedly served up from a North Korean website.
Imagine you’re a hearing impaired person who wants to hire a sign language interpreter. The process is antiquated and lengthy. You have to send a fax to a local municipal government to make a reservation two weeks in advance, and officials then look for an interpreter whose schedule matches yours. Once they find one, you’ll get a reply by fax.
Under this system, it is impossible to get an interpreter right away to deal with urgent matters.
But Junto Ohki, a young entrepreneur, has not only relieved many people of such anxieties, but also helped change the widespread belief that sign language interpreting is something that should be provided by the public sector as welfare.
Ohki founded his company, ShuR Co., when he was a sophomore at Keio University in 2008. The company now runs a Skype-based sign language interpreting business. ShuR also created the world’s first online sign language dictionary called SLinto.
With ShuR, users can call the company, based in Tokyo’s Shinagawa Ward, from anywhere to ask to use the service.
“I was studying IT business (at university). Though Skype wasn’t that popular at that time, I thought if we use this technology, we can remotely provide sign language interpretation for people with hearing disabilities without actually dispatching interpreters,” Ohki said in a recent interview with The Japan Times. “With this, I thought I could solve many problems that my friends with hearing disabilities encountered in their lives.”
With ShuR’s service, hearing impaired people can see a doctor when they get sick or enjoy simple things like shopping.
The 29-year-old Gunma native still remembers clearly when his company offered free sign language interpretation as a trial run in 2012. As an interpreter himself, Ohki took a Skype call from an elderly man who asked, “Is it true that if I make a phone call, sushi will be delivered to my house?”
The man, who had a hearing disability, knew about such delivery services but had been unable to use a phone himself.
“Yes it’s true,” Ohki said in sign language, going on to order sushi for the man.
Twenty minutes later, Ohki received a call from the same man. When he took the call, the first thing that appeared on the screen was a plate of sushi. The man then appeared with a big smile. “What you said was true. Sushi arrived!” he told Ohki.
This kind of experience became a driving force for Ohki to develop the service further.
Ohki’s firm does not charge individual users. Instead, it sells the service to corporations that see the need for better communication with customers and employees, including those with hearing disabilities.
But it was tough to make companies understand why they should pay for the service, Ohki said.
“Most people think that sign language belongs to the field of welfare, and interpretation should be done by volunteers,” he said.
It took some time for Ohki to convince businesses that they would benefit from the service.
But thanks to his efforts, firms like JR East and Kao Corp. use ShuR’s services. For example, customers can use tablets at JR train stations connected to ShuR’s interpreters to ask how to buy a ticket or for directions.
“They were already offering multilingual services, so it was just like adding another menu item to accommodate the hearing impaired,” Ohki said.
Meanwhile, cosmetics and household goods manufacturer Kao’s customer centers use the service to respond to various questions from customers about their products.
About 400 establishments nationwide, including hospitals, shopping centers and customer support centers, are now equipped with tablets connected to the interpreters. Some companies also use the service to have internal meetings with employees who have hearing disabilities, Ohki said.
Until he started university, Ohki, who is not hearing impaired himself, had no experience with sign language or people with hearing disabilities. The only time he saw sign language was on an NHK program when he was a junior high school student.
An avid photographer, in high school he dreamed of becoming a photojournalist who reports from war zones. He even went to the United States to study English, hoping that acquiring photography and language skills would lead to becoming a journalist. While he was chosen as a finalist in a nationwide photo contest for high school students, he didn’t take the top prize. The loss prompted him to rethink his career goals and diversify his areas of study at university.
At Keio University, an old memory came back that would set him on a new course. He remembered the vivid impression from the sign language he saw on NHK and how he thought it was such a beautiful language. Though he wanted to join a sign language club, there was no such club at his university. In the summer of his first year, a female friend asked him to create a sign language club with her.
The pair, complete beginners in the language, founded the club and started learning to sign.
Three months later, he was asked through an acquaintance to appear on NHK’s “Kohaku Uta Gassen” (“Year-end Song Festival”) program. Popular singer Yo Hitoto, a Keio University graduate scheduled to sing on the program, was looking for someone who could perform her song’s lyrics in sign language, and Ohki’s newly established club was called in.
“I don’t think she knew we had only studied sign language for three months. But when I was asked to join her, I recklessly said, ‘Yes, we can do it,’ ” Ohki said with a smile.
After intensive training, Ohki and his club mates made a successful debut on NHK’s popular year-end program.
Being part of that show had a tremendous ripple effect. The club was invited to many places across Japan to demonstrate and teach sign language. TV stations and newspapers interviewed him and other club members.
“I thought about the reason why we captured media attention and I came to the conclusion that there weren’t enough entertainment programs for the hearing impaired,” Ohki said.
Ohki then began creating an online travel program with hearing impaired people. As they traveled together for the program, he discovered how difficult the life of people with disabilities could be.
“They can’t even call an ambulance or go to see a doctor because they can’t talk even if they get sick,” he said, adding that learning about their lives led him to establish his company in his second year of university.
As for the online dictionary SLinto, Ohki said the database, accessible to anyone, aims to make the process of learning sign language easier.
“When I was studying sign language, I had a hard time finding the meaning of signs,” Ohki said. “You can Google the word ‘dog’ if you want to know how to say it in sign language, but when you see a hand motion for a specific word, you can’t look up what it means. It’s like having a Japanese-to-English dictionary, but not having an English-to-Japanese dictionary.”
Using a special keyboard displayed on a computer screen, users of SLinto can choose a hand motion for a sign, such as placing an index finger in front of the stomach, and then various video clips similar to those movements will appear. Users then look for the motion they want to know about from among the video clips.
Ohki said they were currently developing an American version of the dictionary and discussing how to promote it in the U.S., which has a huge sign language-related market. The free service generates revenue from ads displayed at the bottom of the screen, he said. He has already acquired a patent for the special on-screen keyboard.
“There aren’t many services that originated in Japan that have spread around the world,” he said. “I would like to make ShuR’s service the world standard for sign language.”
Now, Ohki is looking ahead to the Tokyo Olympics and Paralympics in 2020. “More and more people with hearing disabilities from overseas will be visiting Japan for the Olympics,” he said. “I would like to create an environment where such people can stay in Japan comfortably without feeling insecure about what to do if they get sick.”
Microsoft is planning to allow developers to add bots to the company’s Bing search results. The software giant has been testing this functionality for at least a month, and previous reports revealed the testing was mainly limited to Seattle. Sources familiar with Microsoft’s plans tell The Verge the company is ready to expand this further at its Build developer event next week.
Microsoft published its Build schedule this week, and one particular session reveals “you can add your custom bots to Bing.” The existing bot test on Bing.com can be found by searching for a restaurant like Monsoon in Seattle. A new option lets you chat with the restaurant through a Skype bot, and you can ask about things like opening hours or parking.
Microsoft originally launched its bots platform at Build last year, and custom bots on Bing.com will be a new way to extend these further outside of Skype and Microsoft’s own bots platform. We’re expecting to hear a lot more about Bing’s bots at Build next week.
After former President Obama reopened America’s diplomatic relations with Cuba, businesses started looking for opportunities to make inroads to the island nation. Google was one of these, with Obama himself announcing it would come to help set up WiFi and broadband access there. Cuba’s national telecom ETECSA officially inked a deal with Google back in December, and today, they finally switched on the service, making the search giant the first foreign internet company to go live on the island.
To be fair, Google already had a headstart when it made Chrome available in Cuba back in 2014. The servers Google switched on today are part of the Google Global Cache (GGC), a global network that locally stores popular content, like viral videos, for quick access. Material stored in-country will load much quicker than through Cuba’s existing setup: piping internet in through a submarine cable connected to Venezuela. Many Cubans can only access the web through 240 public access WiFi spots scattered through the country, according to Buzzfeed. While this won’t bring Cuban internet anywhere near as fast as American access, it’s still a huge step forward.
Self-driving vehicles have yet to hit the road in a major way, but Amazon already is exploring the technology’s potential to change how your packages are delivered.
Amazon is the nation’s largest online retailer, and its decisions not only turn heads but influence the entire retail and shipping industries, analysts say. That means any foray into the self-driving arena – whether as a developer or customer – could have a significant effect on the technology’s adoption.
Amazon has assigned a dozen employees to determine how it can use the technology as part of its business, the Wall Street Journal reported Monday. It’s unclear what shape Amazon’s efforts will take or how far along they might be, although the company has no plans to create its own vehicles, according to the report.
Nevertheless, the Amazon group offers an early indication that big companies are preparing for the technology’s impact.
Transportation experts anticipate that self-driving cars will fundamentally alter the way people get around and the way companies ship goods, changes that stand to disrupt entire industries and leave millions of professional drivers without jobs. The forthcoming shift has attracted the money and attention of the biggest names in the technology and automotive industries, including Apple, Uber, Google, Ford, General Motors and Tesla, among others.
In particular, the technology could make long-haul shipping cheaper and faster because, unlike human drivers, machines do not command a salary or require down time. That would be important to Amazon, whose shipping costs continue to climb as the company sells more products and ships them faster, according to its annual report. Amazon even invested in its own fleet of trucks in December 2015 to give the company greater control over distribution.
If Amazon adopts self-driving technology, it may push others to do the same.
“When Amazon sneezes, everyone wakes up,” said Satish Jindel, president of SJ Consulting Group, a transportation and logistics advisory firm.
The company said it shipped more than 1 billion items during the 2016 holiday season.
An Amazon spokeswoman declined a request for an interview, citing a “long-standing practice of not commenting on rumors and speculation.” The company’s chief executive, Jeffrey P. Bezos, owns The Washington Post.
Amazon has become something of a pioneer in home delivery, in part by setting the standard for how quickly purchases arrive on your doorstep. The company has begun using aerial drones in an effort to deliver goods more quickly, completing its first successful flight to a customer in the United Kingdom in December. Like self-driving vehicles, drones will need to overcome regulatory hurdles before they’re widely deployed.
In its warehouses, Amazon has used thousands of robots that pull items from shelves and pack them. Last summer, Deutsche Bank analysts found the robots reduced the time to fulfill an order from more than an hour to 15 minutes, according to business news site Quartz. They also saved Amazon about $22 million per warehouse. Amazon acquired Kiva, the company that makes the robots, in 2012 for $775 million.
Waymo—or, the company formerly known as Google’s self-driving car project—announced Tuesday that it plans to sign up hundreds of households living in and around the Phoenix, Arizona, area for a trial that will give them free, on-demand access to self-driving cars.
“Rather than offering people one or two rides, the goal of this program is to give participants access to our fleet every day, at any time, to go anywhere within an area that’s about twice the size of San Francisco,” John Krafcik, CEO of Waymo, wrote in a post on Medium.
The fact that Krafcik outlined how much of the greater Phoenix area will be open to riders is significant, suggesting that Waymo has mapped it in great detail and is confident that its cars will perform well there. This is similar to the approach Uber took when it launched its self-driving taxi program in Pittsburgh last year.
Uber’s Pittsburgh experiment showcased a technology that was a long way from self-sufficient (see “What to Know Before You Get In a Self-driving Car”), and since then the ride-hailing giant’s autonomous vehicle operations have had a rough ride—including being accused by Waymo of stealing its lidar technology.
Waymo, meanwhile, appears to believe its fleet of self-driving Chrysler Pacifica Hybrid minivans and Lexus RX450h SUVs is up to the challenge of ferrying families to and from work, soccer practice, and on errands. While the company makes clear that each car will come with a human test driver, Krafcik said the purpose of the trial is to learn more about how people use Waymo’s vehicles—where they go with them, how they interact with them during rides, and so on.
This could be a sign that the technology is maturing to the point that Waymo is becoming more concerned with how to make an actual business out of its cars (which was, after all, the point of spinning the company out of Google in the first place). There is also plenty of pressure from a growing list of competitors to keep pushing forward.
Regardless of the motivation, the trial is likely to provide a trove of data on what regular people do with autonomous vehicles when given the opportunity. And if Waymo’s years of experience in testing self-driving cars is any indication, there are bound to be a lot of unexpected results.
Open up the photo app on your phone and search “dog,” and all the pictures you have of dogs will come up. This was no easy feat. Your phone knows what a dog “looks” like.
This and other modern-day marvels are the result of machine learning. These are programs that comb through millions of pieces of data and start making correlations and predictions about the world. The appeal of these programs is immense: These machines can use cold, hard data to make decisions that are sometimes more accurate than a human’s.
But know this: Machine learning has a dark side. “Many people think machines are not biased,” Princeton computer scientist Aylin Caliskan says. “But machines are trained on human data. And humans are biased.”
Computers learn how to be racist, sexist, and prejudiced in a similar way that a child does, Caliskan explains: from their creators.
We think artificial intelligence is impartial. Often, it’s not.
Nearly all new consumer technologies use machine learning in some way. Like Google Translate: No person instructed the software to learn how to translate Greek to French and then to English. It combed through countless reams of text and learned on its own. In other cases, machine learning programs make predictions about which résumés are likely to yield successful job candidates, or how a patient will respond to a particular drug.
A machine learning program sifts through billions of data points to solve problems (such as “can you identify the animal in this photo?”), but it doesn’t always make clear how it has solved them. And it’s increasingly clear these programs can develop biases and stereotypes without us noticing.
Last May, ProPublica published an investigation of a machine learning program that courts use to predict who is likely to commit another crime after being booked. The reporters found that the software systematically rated black people at a higher risk than whites.
“Scores like this — known as risk assessments — are increasingly common in courtrooms across the nation,” ProPublica explained. “They are used to inform decisions about who can be set free at every stage of the criminal justice system, from assigning bond amounts … to even more fundamental decisions about defendants’ freedom.”
The program learned about who is most likely to end up in jail from real-world incarceration data. And historically, the real-world criminal justice system has been unfair to black Americans.
This story reveals a deep irony about machine learning. The appeal of these systems is they can make impartial decisions, free of human bias. “If computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long,” ProPublica wrote.
But what happened was that machine learning programs perpetuated our biases on a large scale. So instead of a judge being prejudiced against African Americans, it was a robot.
It’s stories like the ProPublica investigation that led Caliskan to research this problem. As a female computer scientist who was routinely the only woman in her graduate school classes, she’s sensitive to this subject.
Caliskan has seen bias creep into machine learning in often subtle ways — for instance, in Google Translate.
Turkish, one of her native languages, has no gender pronouns. But when she uses Google Translate on Turkish phrases, it “always ends up as ‘he’s a doctor’ in a gendered language.” The Turkish sentence didn’t say whether the doctor was male or female. The computer just assumed if you’re talking about a doctor, it’s a man.
How robots learn implicit bias
Recently, Caliskan and colleagues published a paper in Science finding that as a computer teaches itself English, it becomes prejudiced against black Americans and women.
Basically, they used a common machine learning program to crawl through the internet, look at 840 billion words, and teach itself the definitions of those words. The program accomplishes this by looking for how often certain words appear in the same sentence. Take the word “bottle.” The computer begins to understand what the word means by noticing it occurs more frequently alongside the word “container,” and also near words that connote liquids like “water” or “milk.”
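The co-occurrence idea described above can be sketched in a few lines. This is a toy illustration, not the actual model from the Science paper (which trained on 840 billion words); the sentences and counts here are invented for demonstration.

```python
from collections import Counter
from itertools import combinations

# Tiny made-up corpus standing in for web-scale text.
sentences = [
    "the bottle is a container",
    "pour the water into the bottle",
    "the bottle of milk spilled",
]

# Count how often each pair of words appears in the same sentence —
# the raw signal a program like this builds word meanings from.
cooccur = Counter()
for sentence in sentences:
    words = set(sentence.split())
    for a, b in combinations(sorted(words), 2):
        cooccur[(a, b)] += 1

# "bottle" turns up alongside "container", "water", and "milk",
# which is how the program starts to infer what a bottle is.
print(cooccur[("bottle", "container")])  # 1
```

Real systems refine these raw counts into dense vectors (embeddings), but the underlying clue is the same: words that keep appearing together are assumed to be related.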
This idea to teach robots English actually comes from cognitive science and its understanding of how children learn language. How frequently two words appear together is the first clue we get to deciphering their meaning.
Once the computer amassed its vocabulary, Caliskan ran it through a version of the implicit association test.
In humans, the IAT is meant to uncover subtle biases in the brain by seeing how long it takes people to associate words. A person might quickly connect the words “male” and “engineer.” But if a person lags on associating “woman” and “engineer,” it’s a demonstration that those two terms are not closely associated in the mind, implying bias. (There are some reliability issues with the IAT in humans, which you can read about here.)
Here, instead of looking at the lag time, Caliskan looked at how closely the computer thought two terms were related. She found that African-American names in the program were less associated with the word “pleasant” than white names. And female names were more associated with words relating to family than male names. (In a weird way, the IAT might be better suited for use on computer programs than for humans, because humans answer its questions inconsistently, while a computer will yield the same answer every single time.)
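The machine analogue of the IAT’s lag time is a similarity score between word vectors. A minimal sketch, using hypothetical 2-D vectors invented purely for illustration (the study’s real vectors came from web-scale text):

```python
import math

# Hypothetical toy embeddings; positions are made up for this example.
vectors = {
    "engineer": (0.9, 0.1),
    "male":     (0.8, 0.2),
    "female":   (0.1, 0.9),
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Instead of measuring how long a human hesitates, we measure how
# close two terms sit in the learned vector space.
male_assoc = cosine(vectors["engineer"], vectors["male"])
female_assoc = cosine(vectors["engineer"], vectors["female"])
print(male_assoc > female_assoc)  # True for these toy vectors
```

A gap between the two scores is the computational counterpart of the human lag: the program has learned, from its training text, that one pairing is more “natural” than the other.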
Like a child, a computer builds its vocabulary through how often terms appear together. On the internet, African-American names are more likely to be surrounded by words that connote unpleasantness. That’s not because African Americans are unpleasant. It’s because people on the internet say awful things. And it leaves an impression on our young AI.
This is as much of a problem as you might think.
The consequences of racist, sexist AI
Increasingly, Caliskan says, job recruiters are relying on machine learning programs to take a first pass at résumés. And if left unchecked, the programs can learn and act upon gender stereotypes in their decision-making.
“Let’s say a man is applying for a nurse position; he might be found less fit for that position if the machine is just making its own decisions,” she says. “And this might be the same for a woman applying for a software developer or programmer position. … Almost all of these programs are not open source, and we’re not able to see what’s exactly going on. So we have a big responsibility about trying to uncover if they are being unfair or biased.”
And that will be a challenge in the future. Already AI is making its way into the health care system, helping doctors find the right course of treatment for their patients. (There’s early research on whether it can help predict mental health crises.)
But health data, too, is filled with historical bias. It’s long been known that women get surgery at lower rates than men. (One reason is that women, as primary caregivers, have fewer people to take care of them post-surgery.)
Might AI then recommend surgery at a lower rate for women? It’s something to watch out for.
So are these programs useless?
Inevitably, machine learning programs are going to encounter historical patterns that reflect racial or gender bias. And it can be hard to draw the line between what is bias and what is just a fact about the world.
Machine learning programs will pick up on the fact that most nurses throughout history have been women. They’ll realize most computer programmers are male. “We’re not suggesting you should remove this information,” Caliskan says. It might actually break the software completely.
Caliskan thinks there need to be more safeguards. Humans using these programs need to constantly ask, “Why am I getting these results?” and check the output of these programs for bias. They need to think hard on whether the data they are combing is reflective of historical prejudices. Caliskan admits the best practices for combating bias in AI are still being worked out. “It requires a long-term research agenda for computer scientists, ethicists, sociologists, and psychologists,” she says.
Counterfeit products and merchant account breaches aren’t the only problems Amazon has to deal with on its e-commerce platform. Many of Amazon’s third-party sellers assert that most of the “just launched” merchants on Amazon Marketplace are peddling products that simply don’t exist, offering a low price to entice naive buyers.
Legitimate sellers are immensely frustrated by what they see as fraudulent competition — some even think it’s an attack on Amazon, perhaps Chinese corporate espionage. The direct financial impact on Amazon is hard to judge, let alone the damage to the website’s brand, but sellers’ outrage is very clear.
Amazon denies that “just launched” scammers have a significant impact on their platform. “Amazon has zero tolerance for fraud,” a spokesperson emailed Inc. “We withhold payment to sellers until we are confident that our customers have received the products and services they ordered. In the event that sellers do not comply with the terms and conditions they’ve agreed to, we work quickly to take action on behalf of customers.” Amazon also noted it works with law enforcement to combat fraud.
Here’s how the scam supposedly works: Someone signs up for an Amazon seller account using fake information. They use software to identify the most prominent listings, then say they are also offering those products for sale. (Occasionally the fake seller will build up a couple of months of legitimate activity first, or hijack an established seller account.) People buy from the fake listings, and are told the product will be shipped in a couple of weeks. By the time they realize the product isn’t coming, the fake seller has already made off with the money, and Amazon ends up eating the refund cost.
In March, video game website Polygon noted a rash of unrealistically low-priced Nintendo Switches supposedly for sale on Amazon. “It seems as if Amazon is being gripped by sellers who sign up with fresh accounts, list popular items below market price with a longer than usual shipping range, and then mark the item as shipped once they receive your money. By the time the shipping date has passed and you file for a return, the seller is long gone.”
Polygon’s account is slightly imprecise: Amazon pays its sellers roughly every 14 days, and it doesn’t disburse money to sellers until they confirm that an order has been shipped. But crucially, sellers don’t have to prove the item was actually received. It’s not clear whether Amazon verifies the tracking numbers that sellers provide, but even if they do, a seller could pay for a shipping label and receive a valid tracking number without ever mailing a product.
A Forbes story published in January noted the same pattern that Polygon did. “While Amazon admitted the fraud and backed up these purchases with their A-to-Z Guarantee, it still left me empty-handed on Christmas morning — a state of affairs in which I was not alone,” Wade Shepard wrote.
Posts about scammers are frequent and popular on the Amazon Seller Forums. One person wrote, “Ever[y] single thread on the forums should be this issue until Amazon responds or fixes it. The fact that I have to come here in desperation is unacceptable.”
In numerous discussions with Inc., as well as in forum posts, sellers said Amazon’s buyer-protection guarantee means the company is eating refund costs once customers realize they’ve been scammed. Other sellers have speculated the scammers make money by selling customer information. As with the issue of counterfeits, virtually all sellers are upset by the perceived lack of communication and transparency from Amazon. The majority of the sellers who spoke with Inc. requested anonymity, citing fears the company would retaliate.
“I see these sellers coming and going, coming and going, all day long,” said Fred Ruckel. Ruckel invented a popular cat toy called the Ripple Rug, and he’s dealt with problems on the e-commerce platform before. “All day long there’s sellers that are ‘just launched,’ then gone, ‘just launched,’ then gone. It’s incredible that… their system is not blocking people.”
Ruckel suggested, “All they have to do is say that when you sign up to be a seller on the Amazon Marketplace, you have to give, let’s say, a $1,000 deposit, and that deposit will stay on for six months — or until you’ve shipped X amount of orders and satisfied X amount of customers, so that we know you are a real seller.”
A user on the Amazon Seller Forums wrote, “It is my hope that Amazon impose[s] a policy to limit the number of products offered by new sellers during an automatic vetting period to verify identity and banking.” The user continued, “I cannot believe Amazon is not doing everything they can [to] eliminate these sellers, it must be costing them millions. I think Amazon should be more proactively communicating with their sellers so everyone can be aware of these fraudulent practices and more sellers could assist in removing these scammers.”
German online fashion platform Zalando achieved revenue of €3.6bn last year, while in Ireland, start-ups including TV fashion stylist Sonya Lennon’s FrockAdvisor have been joined by newcomers Outfitable and Hello Bezlo in the fashion tech arena.
But what is fashion tech? The term seems to fit any online fashion business, from e-commerce to v-commerce, to smart wearables and clothes and accessories with built-in functionality.
On the e-commerce side, there are platforms like Farfetch and FrockAdvisor that connect boutiques with customers, while v-commerce company Trillenium creates virtual stores for brands.
Smart wearables range from FitBit to Knomo and high-end jewellery brand, Vinaya, which has Bluetooth functionality.
Fashion and tech are being combined on the catwalk using innovative fabrics, LED lights, and conductive thread.
On the perimeters are inventions that combine medical and environmental functionality with clothing.
For example, there is the Foxleaf drug-dispensing bra and Dahea Sun’s Rain Palette line, which can detect air quality. Lennon co-founded FrockAdvisor with fellow fashion designer, Brendan Courtney. It’s a platform that connects independent fashion retailers with fashion-conscious customers.
She is sceptical about the term being used to describe every online fashion business.
“For me, it’s just a little bit gimmicky,” she says, adding that “there’s hard tech and soft tech. Any fashion business is going to have to harness technology to survive and thrive”.
FrockAdvisor is not a virtual customer service. It’s real customer service through a digital medium, she says. “Every business has to be led by a digital strategy. In a way, technology is a medium by which we do business, rather than a solution in itself.”

Dima Kfouri recently pitched her start-up, Outfitable, at NDRC, as a response to her frustration at online clothing retailers’ lack of uniformity of sizes.
She plans to harness technology to develop her brand. “What we hope to do is apply machine-learning technology to create a personalised feed, the equivalent of a Netflix experience.”
Clodagh Connell, of Irish children’s wear brand, Hello Bezlo, views fashion tech as “fashion being enhanced by technology”, how fabrics are produced to match the function and experience of wearing clothes and accessories. While researching her idea for a fashion brand that encourages young girls to get involved in science, technology, engineering and mathematics, Ms Connell became fascinated with fashion tech. Hello Bezlo recently co-hosted Code Couture — a workshop that combined fashion and technology — with coding club network, CoderDojo and Zalando.
Ms Kfouri attended the Dublin Tech Summit earlier this year and was happy to see a full stage devoted to fashion. She says fashion tech is much more than smartwatches.
“You’ve got things like AI and chatbots and user messaging, for a more customized and tailored service. You’ve got virtual and augmented reality, virtual changing rooms with interactive mirrors.” She says that the two industries have a lot in common. “The fashion space is creative, but every fashion line’s bottom line is going to be based on profit, which is ultimately based on transactions.”
Ms Connell says she’s been “blown away” by how the industry has evolved in the last year. “Designers like Iris van Herpen have really paved the way forward. And because this space is so new, there is so much room for innovation and new start-ups.” So what’s next for the industry?
Fashion tech and advancements in technology are going to make the industry more sustainable, Ms Kfouri believes. “Stella McCartney is one of the leaders in this area. She’s using tech in amazing ways and she’s a big believer in sustainable fashion.”
Ms Kfouri foresees the 3D printer being front and center. She says that using the printer, “along with open-source platforms, you’ll be able to create fashion right then and there. It’s already been done with jewelry and accessories.”
3D printing is great if you need to create something made of plastic or even metal or ceramic out of thin air. But what if you want something fuzzier and warmer? Something, like say, a hand-knit scarf or sweater?
Enter Kniterate, a “digital knitting machine” that makes it easy to take digital designs and automatically knit them into wearable fabrics at the push of a button. Simpler designs like scarves and ties can be knitted wholly by the Kniterate, while more complex pieces like dresses or sweaters will require a bit of assembly after the machine has done its work. The company is also developing an app to make it easy to design new patterns, add images and text, and customize the type of stitches used.
According to the Kickstarter page, Kniterate hopes to bridge the gap between traditional home knitting machines (apparently a thing that’s been around since the ‘80s — who knew?), which are cheaper but complicated and tricky to use, and more expensive industrial machines. That said, a single Kniterate costs $4,699 on Kickstarter, with only 125 units being offered through crowdfunding. And if you miss that, you’ll be stuck paying $7,499 at retail, which certainly stretches the definition of a “consumer” price point.
Obviously, given the price and the fact that Kniterate is an extremely complex piece of hardware and software from a first-time company, it’s worth doing your own research before putting up the cash. The first Kniterate units are expected to ship in April 2018.
Over the years, Google trained computer systems to keep copyrighted content and pornography off its YouTube service. But after seeing ads from Coca-Cola, Procter & Gamble and Wal-Mart appear next to racist, anti-Semitic or terrorist videos, its engineers realized their computer models had a blind spot: They did not understand context.
Now teaching computers to understand what humans can readily grasp may be the key to calming fears among big-spending advertisers that their ads have been appearing alongside videos from extremist groups and other offensive messages.
Google engineers, product managers and policy wonks are trying to train computers to grasp the nuances of what makes certain videos objectionable. Advertisers may tolerate use of a racial epithet in a hip-hop video, for example, but may be horrified to see it used in a video from a racist skinhead group.
That ads bought by well-known companies can occasionally appear next to offensive videos has long been considered a nuisance to YouTube’s business. But the issue has gained urgency in recent weeks, as The Times of London and other outlets have written about brands that inadvertently fund extremists through automated advertising — a byproduct of a system in which YouTube shares a portion of ad sales with the creators of the content those ads appear against.
This glitch in the company’s giant, automated process turned into a public-relations nightmare. Companies like AT&T and Johnson & Johnson said they would pull their ads from YouTube, as well as Google’s display advertising business, until they could get assurances that such placement would not happen again.
Consumers watch more than a billion hours of video on YouTube every day, making it the dominant video platform on the internet and an obvious beneficiary as advertising money moves online from television. But the recent problems opened Google to criticism that it was not doing enough to look out for advertisers. It is a significant problem for a multi-billion-dollar company that still gets most of its revenue from advertising.
“We take this as seriously as we’ve ever taken a problem,” Philipp Schindler, Google’s chief business officer, said in an interview last week. “We’ve been in emergency mode.”
Over the last two weeks, Google has changed what types of videos can carry advertising, barring ads from appearing with hate speech or discriminatory content.
In addition, Google is simplifying how advertisers can exclude specific sites, channels and videos across YouTube and Google’s display network. It is allowing brands to fine-tune the types of content they want to avoid, such as “sexually suggestive” or “sensational/bizarre” videos.
It is also putting in more stringent safety standards by default, so an advertiser must choose to place ads next to more provocative content. Google created an expedited way to alert it when ads appear next to offensive content.
The Silicon Valley giant is trying to reassure companies like Unilever, the world’s second-largest advertiser, with a portfolio of consumer brands like Dove and Ben & Jerry’s. As other brands started fleeing YouTube, Unilever discovered three instances in which its brands appeared on objectionable YouTube channels.
The sponge could address clean-up challenges often encountered during oil spills, like those seen after the Deepwater Horizon spill, where oil forms a plume and drifts below the surface of the water.
“The Oleo Sponge offers a set of possibilities that, as far as we know, are unprecedented,” says co-inventor Seth Darling, a scientist with Argonne National Laboratory’s Center for Nanoscale Materials and a fellow of the University of Chicago’s Institute for Molecular Engineering. “We already have a library of molecules that can grab oil, but the problem is how to get them into a useful structure and bind them there permanently.”
How Oleo Sponge works
The scientists started out with common polyurethane foam, used in everything from furniture cushions to home insulation. This foam has lots of nooks and crannies, like an English muffin, which could provide ample surface area to grab oil; but they needed to give the foam a new surface chemistry in order to firmly attach the oil-loving molecules.
Previously, Darling and fellow Argonne chemist Jeff Elam had developed a technique called sequential infiltration synthesis, or SIS, which can be used to infuse hard metal oxide atoms within complicated nanostructures.
After some trial and error, they found a way to adapt the technique to grow an extremely thin layer of metal oxide “primer” near the foam’s interior surfaces. This serves as the perfect glue for attaching the oil-loving molecules, which are deposited in a second step; they hold onto the metal oxide layer with one end and reach out to grab oil molecules with the other.
The result is Oleo Sponge, a block of foam that easily adsorbs oil from the water. The material, which looks a bit like an outdoor seat cushion, can be wrung out to be reused—and the oil itself recovered.
Tested over and over in a giant seawater tank
In tests at Ohmsett, the National Oil Spill Response Research and Renewable Energy Test Facility (a giant seawater tank in New Jersey), the Oleo Sponge successfully collected diesel and crude oil both from below and on the water surface.
“The material is extremely sturdy. We’ve run dozens to hundreds of tests, wringing it out each time, and we have yet to see it break down at all,” Darling says.
Oleo Sponge could potentially also be used routinely to clean harbors and ports, where diesel and oil tend to accumulate from ship traffic, says John Harvey, a business development executive with Argonne’s Technology Development and Commercialization division.
“The technique offers enormous flexibility, and can be adapted to other types of cleanup besides oil in seawater. You could attach a different molecule to grab any specific substance you need,” Elam says.
The team is actively looking to commercialize the material, Harvey says. Those interested in licensing the technology or collaborating with the laboratory on further development may contact firstname.lastname@example.org.
Source: University of Chicago/Argonne National Laboratory
3D printing is making its way into our clothes, our medicine, and now even our homes. Apis Cor, a company dedicated to building with 3D printing, has built its first ever 3D-printed home in the town of Stupino, near Moscow, Russia.
Construction took only 24 hours, in freezing December 2016 temperatures of -35°C (-31°F). The home, equipped with a living room, kitchen, bathroom, and hallway, was made on-site, a world first for a 3D-printed building constructed in that amount of time.
The total cost of construction for the 38-square-meter (409-square-foot) home was $10,000, including labor, construction materials, and furnishings. That works out to roughly $266 per square meter (about $25 per square foot), and the company is confident that a square house with a simpler design and averagely priced materials would cost only $223 per square meter (about $21 per square foot).
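The per-area figures follow from simple arithmetic on the totals reported above; a quick back-of-the-envelope check (the small gap versus the company’s quoted $266 per square meter presumably reflects rounding in the reported $10,000 total):

```python
# Cost-per-area check for the Apis Cor house, using the figures reported above.
total_cost_usd = 10_000
area_sq_m = 38
SQ_FT_PER_SQ_M = 10.7639  # standard square-meter to square-foot conversion

area_sq_ft = area_sq_m * SQ_FT_PER_SQ_M       # ~409 sq ft, matching the article
cost_per_sq_m = total_cost_usd / area_sq_m    # ~$263 per square meter
cost_per_sq_ft = total_cost_usd / area_sq_ft  # ~$24.45 per square foot

print(f"{area_sq_ft:.0f} sq ft, ${cost_per_sq_m:.0f}/sq m, ${cost_per_sq_ft:.2f}/sq ft")
# → 409 sq ft, $263/sq m, $24.45/sq ft
```

The key point is that dollars per square foot should be roughly a tenth of dollars per square meter, since a square meter is about 10.76 square feet.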
The construction of the home was made possible by a mobile 3D printer. Once the mobile printer had completed the walls, it was removed with a crane manipulator to allow manual workers to come in and finish the job.
Google’s plans for its futuristic Mountain View campus changed yet again when it swapped land with LinkedIn last year. The tech titan has recently submitted its updated proposal to the City of Mountain View, and its computer renderings show us what Google’s new vision looks like. The canopy you see in the center of the image above will be located outside the existing Googleplex and will be able to regulate climate, air quality and sound indoors.
As 9to5Google said, the company also wants its campus to become a “destination for the local community,” so it envisions a place with lots of green spaces open not just to employees, but also to the public. It’s planning to build small parks throughout and a plaza with food stalls and the like. To make sure all the activity doesn’t distract Googlers, employee offices will be located on the second floor of the new building.
You can see the entirety of the big G’s plans in the documents (PDF) it submitted. Mountain View’s authorities will have to approve the proposal before construction begins. If and when it does, the company expects the new campus to be completed within 30 months.
Facebook has a new plan to get more of Africa online: Fiber optic cables. The social giant on Monday announced plans to lay nearly 500 miles of fiber cable in Uganda by the end of the year, infrastructure that Facebook believes will provide internet access for more than three million people.
Facebook is not, however, providing its own wireless network. The company is partnering with Airtel and BCS to provide the actual internet service, and says the fiber will offer more support for “mobile operators’ base stations.” The company also says that it’s “open” to working with other network providers down the line.
All three organizations are making some kind of financial commitment to the project, according to a person familiar with the deal, though it’s unclear who is paying for what.
The move to dig up ground and lay physical fiber cables is the latest in a string of efforts Facebook has made over the past two years to get more people online. Facebook’s mission is to connect everyone in the world with its social network, but that’s hard to do if significant portions of the world don’t have internet access.
CEO Mark Zuckerberg has been trying to fix that, both with infrastructure and with efforts to lower the cost of wireless data.
In India, for example, Facebook tried to make some internet services free for some users, including its own social network. Indian regulators pushed back because of net neutrality concerns, and the free service was ultimately blocked.
In 2015, Facebook started building solar-powered drones to fly high overhead and beam internet to rural places down below. The first test flight for one of these drones was completed in June, though it crashed upon landing. (Even so, the drone approach is, as far as we know, still very much part of the company’s longterm plans.)
But now Facebook is at it again, this time with fiber cables. It’s a new approach for the social giant, but not new to Silicon Valley. Alphabet has also been laying fiber in the United States, though those efforts have hit roadblocks, including layoffs, in part because digging up the dirt and laying fiber cable is expensive.
Facebook declined to share details on the cost of the fiber project in Uganda.
Africa is home to over 1.2 billion people, but only 226 million smartphones were connected to the internet by the end of 2015, according to The Guardian. That number is expected to triple by 2020.
The 5th Annual Humans to Mars Summit (H2M) will be held from May 9-11, 2017 at The George Washington University in Washington, DC. H2M is the largest conference in the world focused on the goal of sending humans to Mars and will feature some of the most prominent and influential people in business, government and academia. The conference will explore critical policy goals and technology solutions required for the human exploration of Mars and the significant progress that has been made since the first H2M was held in 2013.
“Children born in 2017 are more likely than any generation before them to witness, before their 18th birthday, humans walk on another planet for the first time,” said Explore Mars CEO Chris Carberry. “For more than five years, the Humans to Mars Summit has been at the forefront of policy and technology decisions that have had a major impact on U.S. space policy. Today we have unprecedented support for Mars exploration from Congress, industry, and the general public. If we make the right decisions, humans will be on the surface of Mars within the next two decades, and the economic and scientific benefits to our country and the world will be unprecedented.”
H2M 2017 will be a platform for discussion on major technical, scientific, and policy challenges that need to be overcome in order to send humans to Mars by the early 2030s. The Summit will also feature topics such as international partnerships and cooperation, the impact on small business and innovation, Hollywood and the Mars story, risk tolerance in space exploration, the role of the Moon in sending humans to Mars, and space diplomacy.
Confirmed speakers include: Buzz Aldrin (Apollo XI, Gemini XII), William Gerstenmaier (NASA: Associate Administrator, HEO), Penny Boston (NASA: Director, Astrobiology Institute), Steve Jurczyk (NASA: Associate Administrator, STMD), Clementine Poidatz (National Geographic Series, Mars), John Grunsfeld (former NASA Associate Administrator and astronaut), Artemis Westenberg (President, Explore Mars, Inc), Thomas Zurbuchen (NASA: Associate Administrator, SMD), Abigail ‘Astronaut Abby’ Harrison (Student; The Mars Generation), Jim Cantrell (CEO of Vector Space Systems), James Green (NASA: Director, Planetary Science), Janet Ivey (Janet’s Planet), Joe Cassady (Aerojet Rocketdyne: Executive Director, Space), Mat Kaplan (Planetary Radio, The Planetary Society) and Ann Merchant (Science and Entertainment Exchange).
Said Explore Mars President Artemis Westenberg, “H2M will host substantive NASA workshops on policy, STEaM competitions for our youth and important debates on strategies for space transportation and human habitats. This is the single best opportunity for all of us to come together and advance the mission that will one day make humans a two-planet species.”
For registration information visit http://h2m.exploremars.org. To become an event sponsor, please contact carberry(at)exploremars(dot)org.
Technology is ever-evolving, just like health care. Believe it or not, technology is here to stay and will, in some way, be a major part of our lives. What I discuss in this book is the innovation of established companies. This one book will not cover all technological advances, but I guarantee you will learn something new and enhance your love for technology.
Stephen Hawking said, “We must … continue to go into space for the future of humanity. I don’t think we will survive another 1,000 years without escaping beyond our fragile planet.” Whether or not that’s true, let’s face the fact that we most definitely need tech companies’ innovation and solar projects to help us.
Investor and Dallas Mavericks owner Mark Cuban reiterated his warning that total robot takeover of blue-collar manufacturing jobs could come sooner than people may expect.
“Automation is going to cause unemployment and we need to prepare for it,” he tweeted on Sunday night, sharing a Medium article about similar warnings in recent weeks by Bill Gates, Elon Musk, and Stephen Hawking.
In December, Cuban, a Shark Tank judge who has invested in Amazon and Netflix, penned a blog post calling on President Donald Trump to make America a world leader in robotics; otherwise, “if nothing in the States changes, we will find ourselves dependent on other countries for almost everything that can and will be manufactured in a quickly approaching future.”
Over the past few decades, oil and gas revenue has helped the United Arab Emirates develop at a breakneck pace. Its glistening megacity Dubai is now home to the world’s tallest building and countless other accolades, while just last year there were new plans announced to build a completely new “city of happiness.”
The UAE’s latest venture may set new heights in terms of ambition, however. On Tuesday, on the sidelines of the World Government Summit in Dubai, the UAE announced that it was planning to build the first city on Mars by 2117. According to CNBC, UAE engineers presented a concept city, about the size of Chicago, at the event for guests to explore.
In a statement, Sheikh Mohammed bin Rashid Al Maktoum, ruler of Dubai and vice president of the UAE, sounded confident about the project. “Human ambitions have no limits, and whoever looks into the scientific breakthroughs in the current century believes that human abilities can realize the most important human dream,” Maktoum said.
And despite the grandiose nature of the idea, the 100-year-plan does emphasize some practical steps. “The Mars 2117 Project is a long-term project,” Maktoum explained in the statement, adding that the first order of business would be making space travel appeal to young Emiratis, with special programs in space sciences being set up at universities in the UAE.
The project will also create an Emirati scientific team, which would later expand to include international scientists. In particular, these teams would seek to develop faster transportation to and from the planet, as well as research what the settlement would look like and how it would be sustainable in terms of food, energy and transportation.
This won’t be the Gulf state’s first foray into space travel. The UAE launched its own space agency in 2014, which launched partnerships with French and British space agencies the next year. It is planning to send an unmanned probe to Mars by 2021, a project that was described as “on track” just last month.
Of course, whether the plan for a city on Mars will actually come to fruition a century from now is hard to predict. However, in a strange way, this might be a good thing. Other recently announced space exploration plans, particularly those focused on Mars, have been criticized for setting too ambitious a time frame given the huge costs of such a mission. By setting such a distant goal, the UAE’s ambitious city becomes a little more realistic.
For the UAE, these attempts to break into space technology may also reveal an anxious attempt to break away from the country’s reliance on oil and gas and related industries, having been hit hard by falling prices recently. Thankfully for them, there’s still plenty of money in sovereign wealth funds to invest in Mars.
Flipboard 4.0 is a response to our ever-growing ecosystem of publishers, Flipboard Magazines, topics and more. With over 30 million magazines created, thousands of publishers on our platform, and tens of thousands of topics—plus input from social networks like Twitter, YouTube and LinkedIn—we re-imagined Flipboard to more effortlessly get you to the things you love. Long-time readers who follow lots of content should find a more streamlined experience, while new users will be able to dive right into their passions with minimal setup.
At the heart of this edition is the Smart Magazine, a new way to organize the world’s stories, curated by experts and enthusiasts, into continually updating collections that can be personalized by you. Your Smart Magazines have a sleek new ‘shelf space’—the Home carousel—for quick access to important and inspiring content about your passions.
You can also build your own Custom Magazine, which allows you to be even more precise about what’s in the content mix. Create a personal or group magazine where you can add your favorite stories, or make a Custom Smart Magazine that includes content from any source, person, publication or even hashtag you like.
Your Home carousel houses up to nine Smart Magazines. Your profile area, located behind your avatar in the top right, hosts everything else you’re following on Flipboard, plus the ability to search our entire platform.
The more you interact with Flipboard, the better your experience will be: the algorithm learns from what you follow, heart and add. Liking great content helps your friends and followers, too, as those stories are more likely to surface to them.
We’ve got even more tips and tricks in this post, including how to further hone your Flipboard and ensure that our system continues to deliver the best possible stories to you.
We hope you’ll find the all-new Flipboard to be as multifaceted as you are. As complex humans, our passions define who we are, and your Flipboard is no different.
As 3D printing becomes increasingly popular, many people are striking a fortune and making it big in the industry. Such success may not be a cakewalk, but actively seeking a breakthrough is the only way to get there and turn what others have only conceptualized into moneymaking ventures. If you have always been interested in 3D printing and want to make money with this technology, it is about time you worked smart to beat the challenges ahead.
3D printable models may be created with a computer-aided design (CAD) package, via a 3D scanner, or with a plain digital camera and photogrammetry software. Models created with CAD contain fewer errors and can be corrected before printing, allowing the design of the object to be verified before it is printed.
Several projects and companies are making efforts to develop affordable 3D printers for home desktop use. Much of this work has been driven by and targeted at DIY/Maker/enthusiast/early adopter communities, with additional ties to the academic and hacker communities.
Three-dimensional printing makes it as cheap to create single items as it is to produce thousands and thus undermines economies of scale. It may have as profound an impact on the world as the coming of the factory did. … Just as nobody could have predicted the impact of the steam engine in 1750—or the printing press in 1450, or the transistor in 1950—it is impossible to foresee the long-term impact of 3D printing. But the technology is coming, and it is likely to disrupt every field it touches. — The Economist, in a February 10, 2011 leader
Everyone knows how progressive Japan’s technology and business industries are. But this year, it seems like “The Land of The Rising Sun” has reached a new milestone with the upcoming Kyocera phones.
Kyocera Corporation President Goro Yamaguchi officially announced the company’s upcoming phones in a press release just last week, and here is a big surprise: these phones are washable! Yes, that is right. Known as the rafre, this DIGNO rafre successor lets users wash the smartphone with foaming hand soap or body soap, as reported on Kyocera‘s website. But the question is: how does a washable phone really work?
Maintaining a smartphone can be a daunting task. You have to be extra careful to keep dust out and avoid dripping liquid onto the internal parts. You should also be careful about the cleaning materials you use: nothing pointed or rough-surfaced, which could leave permanent scratches on the screen or damage other external parts. Well, there is no need to worry about any of this with a washable smartphone.
A washable smartphone can simply be cleaned by hand and just by using foaming hand soap or body soap. Just gently rub the soap all over the smartphone’s exterior and rinse it off. According to PCMag, it can even be dunked in a bowl of water and scrubbed by hand until the grime comes off. Moreover, this phone is resistant to hot water. It also allows one to use it even with wet hands or gloved hands.
Other features include a special cooking app and a hand-gesture feature, which can be used to search for recipes, set timers, or take calls. This is primarily made for those who spend a lot of time in the kitchen cooking at home for family and friends. The washable smartphone, model name rafre KYV40, will be available in Japan this March.