
Starbucks is going to try out a mobile order-only store



Starbucks introduced its mobile ordering system in 2015, and in some ways it has been a victim of its own success. Customers at busy locations use the app to choose their drinks and pay in advance in the hope of skipping the line – but they end up waiting anyway, behind a virtual queue that can be as long as, or longer than, the physical one. Now the company is looking for ways to make mobile ordering work better, and in pursuit of that goal it’s going to trial a location that exclusively serves mobile order customers, inside its own Seattle HQ.


The location will go mobile-only starting next week, Reuters reports, turning one of the Seattle-based company’s two internal cafes into a dedicated mobile order and pay location. All mobile orders from the building’s 5,000 employees will be routed to the new location, which will feature a different design, with a prominent pick-up window that also gives customers a view of baristas preparing the orders, according to the report.


Starbucks added the order ahead and pick-up option to its app across the U.S. in September 2015, and it has been popular with users ever since. The feature lets users browse the Starbucks menu within the app, select a location, and pay for their order ahead of time, receiving an estimate of when it’ll be ready to pick up. Depending on the store, the order is then left at a designated pick-up spot or called out by a barista, just like orders placed in person.


Featured Image: Starbucks.com

The new algorithms enabling Facebook’s data fixation



A billion and a half photos find their way onto Facebook every single day, and the company is racing to understand them, and their moving counterparts, in the hope of increasing engagement. And while machine learning is undoubtedly the map to the treasure, Facebook and its competitors are still trying to work out how to deal with the spoils once they find them. Facebook AI Similarity Search (FAISS), released as an open source library last month, began as an internal research project to address bottlenecks slowing the process of identifying similar content once a user’s preferences are understood. Under the leadership of Yann LeCun, Facebook’s AI Research (FAIR) lab is making it possible for everyone to find needles in a haystack more quickly.


On its own, training a machine learning model is already an incredibly intensive computational process. But a funny thing happens when machine learning models comb over videos, pictures and text: new information gets created. FAISS is able to efficiently search across billions of high-dimensional vectors to identify similar content.
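For a sense of what that looks like in practice, here is a minimal sketch using the open source FAISS Python API, with random vectors standing in for learned embeddings; the dimensionality and dataset size are illustrative assumptions, not Facebook’s production figures.

```python
import numpy as np
import faiss  # pip install faiss-cpu

d = 128                                          # embedding dimensionality (illustrative)
rng = np.random.default_rng(0)
xb = rng.random((100_000, d), dtype=np.float32)  # "database" vectors, e.g. content embeddings
xq = rng.random((5, d), dtype=np.float32)        # query vectors, e.g. a user's recent videos

index = faiss.IndexFlatL2(d)          # exact L2 (Euclidean) search over the database
index.add(xb)                         # index all database vectors
distances, ids = index.search(xq, 5)  # 5 nearest neighbors for each query
print(ids[0])                         # ids of the items most similar to the first query
```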


In an interview with TechCrunch, Jeff Johnson, one of the three FAIR researchers working on the project, emphasized that FAISS isn’t so much a fundamental AI advancement as a fundamental AI enabling technique.


Imagine you wanted to perform object recognition on a public video that a user shared to understand its contents so you could serve up a relevant ad. First you’d have to train and run that algorithm on the video, coming up with a bunch of new data.


From that, let’s say you discover that your target user is a big fan of trucks, the outdoors and adventure. This is helpful, but it’s still hard to say which advertisement you should display: a rugged tent? An ATV? A Ford F-150?


To figure this out, you would want to create a vector representation of the video you analyzed and compare it to your corpus of advertisements with the intent of finding the most similar video. This process would require a similarity search, whereby vectors are compared in multi-dimensional space.
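As a toy illustration of that comparison, the sketch below scores a hypothetical video embedding against a set of hypothetical ad embeddings using cosine similarity in plain NumPy. The vector sizes, names, and the choice of cosine similarity are assumptions made for illustration, not a description of Facebook’s actual pipeline.

```python
import numpy as np

def most_similar(query: np.ndarray, corpus: np.ndarray) -> int:
    """Return the index of the corpus row with the highest cosine similarity to the query."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    return int(np.argmax(c @ q))

video_vec = np.random.rand(300).astype(np.float32)        # hypothetical embedding of the analyzed video
ad_vecs = np.random.rand(10_000, 300).astype(np.float32)  # hypothetical embeddings of candidate ads

best_ad = most_similar(video_vec, ad_vecs)
print(f"closest advertisement: #{best_ad}")
```

A brute-force scan like this works for a few thousand candidates, but it scales linearly with the corpus, which is exactly the bottleneck a dedicated similarity search library is meant to address.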


Image: In this representation of a similarity search, the blue vector is the query; the distance between the vectors reflects their relative similarity.



In real life, the property of being an adventurous, outdoorsy fan of trucks could constitute hundreds or even thousands of dimensions of information. Multiply this by the number of different videos you’re searching across and you can see why the library you choose for similarity search matters.


“At Facebook we have massive amounts of computing power and data and the question is how we can best take advantage of that by combining old and new techniques,” posited Johnson.


Within the vector space described above, nearest neighbor algorithms identify the most closely related vectors. Facebook reports that implementing k-nearest neighbor search across GPUs resulted in an 8.5x improvement in processing time.
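FAISS ships with GPU support, and moving an index onto a GPU is a one-line conversion in its Python API. A minimal sketch, assuming a CUDA-capable GPU and the faiss-gpu package; the dataset sizes are illustrative, and the 8.5x figure is Facebook’s own benchmark, not something this snippet reproduces.

```python
import numpy as np
import faiss  # pip install faiss-gpu (requires a CUDA-capable GPU)

d = 128
xb = np.random.rand(1_000_000, d).astype(np.float32)  # database vectors (illustrative size)
xq = np.random.rand(1_000, d).astype(np.float32)      # query vectors

cpu_index = faiss.IndexFlatL2(d)
cpu_index.add(xb)

res = faiss.StandardGpuResources()                     # allocate GPU resources
gpu_index = faiss.index_cpu_to_gpu(res, 0, cpu_index)  # copy the index onto GPU 0

distances, ids = gpu_index.search(xq, 10)              # k=10 nearest neighbors per query
```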


More efficient similarity search opens up possibilities for recommendation engines and personal assistants alike. Facebook M, its own intelligent assistant, relies on having humans in the loop to assist users. Facebook considers “M” to be a test bed to experiment with the relationship between humans and AI. LeCun noted that there are a number of domains within M where FAISS could be useful.


“An intelligent virtual assistant looking for an answer would need to look through a very long list,” LeCun explained to me. “Finding nearest neighbors is a very important functionality.”


Improved similarity search could support memory networks to help keep track of context and basic factual knowledge, LeCun continued. Short-term memory contrasts with learned skills like finding the optimal solution to a puzzle. In the future, a machine might be able to watch a video or read a story and then answer critical follow-up questions about it.


More broadly, FAISS could support more dynamic content on the platform. LeCun noted that news and memes change every day and better methods of searching content could drive better user experiences.


A billion and a half new photos a day presents Facebook with a billion and a half opportunities to better understand its users. Each and every fleeting chance at boosting engagement is dependent on being able to quickly and accurately sift through content and that means more than just tethering GPUs.


Featured Image: Bryce Durbin

Instagram begins blurring some sensitive content, opens two-factor to all



Instagram is already doing a lot to spot and censor posts that violate its community guidelines, like outright porn, but now it’s also taking steps to block out potentially sensitive material that might not technically run afoul of its rules. The social network is adding a blurred screen with a “Sensitive Content” warning on top of posts that fit this description, which basically means posts that have been reported as offensive by users, but don’t merit takedowns per the posted Instagram guidelines.


The feature adds a step even for users who aren’t worried about being scandalized by what’s posted on the network, since they’ll have to tap an acknowledgement in order to view the photo or video. The blurred view shows up in both list and grid display modes, and Instagram says it will prevent “surprising or unwanted experiences” in the app. Instagram is likely trying to strike a balance between reducing community complaints and keeping its guidelines relatively open.


The second big update is the broad release of two-factor authentication to all users. Previously, this was available only to a limited set of members, despite the near-universal recommendation from security experts that people use two-factor wherever it’s available. You can enable the feature by tapping the gear icon on your profile page and turning on “Require Security Code” under “Two-Factor Authentication.”


Instagram’s new blurred content filters are another step in its continued efforts to clean up its act relative to community abuse and spam. Other steps the social network has taken include disabling comments on individual posts, and offering reporting tools and support within the app for cases that involve potential self-harm.

Facebook looks inward for new AI technical talent



The race is on to attract as much expertise in artificial intelligence as possible at tech companies large and small, and more than a few Silicon Valley giants are looking inward to convert tech talent they already possess into the AI resources they increasingly need. Facebook has its own AI course, which is oversubscribed, according to a new report by Wired, and which is led by one of the leading AI researchers in the world.


Facebook’s Larry Zitnick, a key leader at the social networking company’s Artificial Intelligence Research lab and an alum of Microsoft Research and CMU Robotics, teaches a class on deep learning for Facebook employees that draws over-capacity crowds. Zitnick’s course sparks strong competition for spots among engineers who already rank among the best in the world, each vying to come to grips with, and excel at, a field outside their original purview but one widely recognized as the hottest in tech.


At the same time, AI and deep learning increasingly touch every aspect of the technology business, so experts who understand where those techniques overlap most usefully with their own original discipline are also going to be very much in demand. There are external efforts underway to create more of these polyglot deep learning pros, including at online education firms like Udacity, but new talent isn’t arriving fast enough from outside sources, traditional and non-traditional alike.


Facebook also offers an AI immersion program, which embeds participants in the work the company is already doing in the field. The goal, again, is to spread expertise across the company and weave deep learning know-how into the organization’s DNA. Expect this to be the rule for big tech companies for the foreseeable future.


Featured Image: Getty Images/Yuri Khristich/Hemera (modified by TechCrunch)