
The new algorithms enabling Facebook’s data fixation



A billion and a half photos find their way onto Facebook every single day and the company is racing to understand them and their moving counterparts with the hope of increasing engagement. And while machine learning is undoubtedly the map to the treasure, Facebook and its competitors are still trying to work out how to deal with the spoils once they find them. Facebook AI Similarity Search (FAISS), released as an open source library last month, began as an internal research project to address bottlenecks slowing the process of identifying similar content once a user’s preferences are understood. Under the leadership of Yann LeCun, Facebook’s AI Research (FAIR) lab is making it possible for everyone to more quickly relate needles within a haystack.


On its own, training a machine learning model is already an incredibly intensive computational process. But a funny thing happens when machine learning models comb over videos, pictures and text — new information gets created! Each piece of content gets distilled into a high-dimensional vector, and FAISS is able to efficiently search across billions of these vectors to identify similar content.


In an interview with TechCrunch, Jeff Johnson, one of the three FAIR researchers working on the project, emphasized that FAISS isn’t so much a fundamental AI advancement as a fundamental AI enabling technique.


Imagine you wanted to perform object recognition on a public video that a user shared to understand its contents so you could serve up a relevant ad. First you’d have to train and run that algorithm on the video, coming up with a bunch of new data.


From that, let’s say you discover that your target user is a big fan of trucks, the outdoors and adventure. This is helpful, but it’s still hard to say what advertisement you should display — a rugged tent? An ATV? A Ford F-150?


To figure this out, you would want to create a vector representation of the video you analyzed and compare it to your corpus of advertisements with the intent of finding the most similar video. This process would require a similarity search, whereby vectors are compared in multi-dimensional space.
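

To make that concrete, here is a minimal sketch of such a comparison in plain NumPy; the three-dimensional vectors and the advertisement labels are invented purely for illustration (real embeddings would have hundreds or thousands of dimensions):

```python
import numpy as np

# Hypothetical embedding of the analyzed video: trucks, outdoors, adventure.
query = np.array([0.9, 0.8, 0.7])

# Invented vector representations of three candidate advertisements.
ads = {
    "rugged tent": np.array([0.2, 0.9, 0.8]),
    "ATV": np.array([0.7, 0.7, 0.9]),
    "Ford F-150": np.array([0.95, 0.4, 0.5]),
}

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: closer to 1.0 means more similar.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Rank the candidate ads by similarity to the query video.
for name, vec in sorted(ads.items(),
                        key=lambda kv: cosine_similarity(query, kv[1]),
                        reverse=True):
    print(f"{name}: {cosine_similarity(query, vec):.3f}")
```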


[Figure: In this representation of a similarity search, the blue vector is the query. The distance between the “arrows” reflects their relative similarity.]



In real life, the property of being an adventurous outdoorsy fan of trucks could constitute hundreds or even thousands of dimensions of information. Multiply this by the number of different videos you’re searching across and you can see why the library you implement for similarity search is important.
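

At that scale, comparing a query against every stored vector one by one stops being practical, which is the bottleneck FAISS targets. As a minimal sketch (with random data standing in for real embeddings), one of the library’s approximate index types partitions the vector space so each query only scans a few cells rather than the whole corpus:

```python
import numpy as np
import faiss  # pip install faiss-cpu

d = 256         # dimensions per vector (could be thousands in practice)
nb = 100_000    # size of the corpus being searched
rng = np.random.default_rng(0)
corpus = rng.random((nb, d), dtype="float32")

# An IVF index partitions the space into nlist cells around trained centroids.
nlist = 100
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFFlat(quantizer, d, nlist)
index.train(corpus)  # learn the partitioning from the data itself
index.add(corpus)

index.nprobe = 8     # cells visited per query: a speed/recall trade-off
query = rng.random((1, d), dtype="float32")
distances, ids = index.search(query, 5)  # 5 approximate nearest neighbors
print(ids)
```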


“At Facebook we have massive amounts of computing power and data and the question is how we can best take advantage of that by combining old and new techniques,” posited Johnson.


Facebook reports that implementing k-nearest neighbor across GPUs resulted in an 8.5x improvement in processing time. Within the previously explained vector space, nearest neighbor algorithms let us identify the most closely related vectors.
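

FAISS exposes that GPU path directly. A minimal sketch of running an exact k-nearest-neighbor search on a GPU (this assumes the faiss-gpu build, and uses random data purely for illustration):

```python
import numpy as np
import faiss  # the GPU path requires the faiss-gpu build

d, nb, k = 128, 50_000, 10
rng = np.random.default_rng(0)
corpus = rng.random((nb, d), dtype="float32")
queries = rng.random((100, d), dtype="float32")

# Exact (brute-force) nearest-neighbor index over L2 distance.
cpu_index = faiss.IndexFlatL2(d)

# Move the same index onto GPU 0; the search API is unchanged.
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, cpu_index)
gpu_index.add(corpus)

distances, ids = gpu_index.search(queries, k)  # k nearest neighbors per query
print(ids.shape)  # (100, 10)
```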


More efficient similarity search opens up possibilities for recommendation engines and  personal assistants alike. Facebook M, its own intelligent assistant, relies on having humans in the loop to assist users. Facebook considers “M” to be a test bed to experiment with the relationship between humans and AI. LeCun noted that there are a number of domains within M where FAISS could be useful.


“An intelligent virtual assistant looking for an answer would need to look through a very long list,” LeCun explained to me. “Finding nearest neighbors is a very important functionality.”


Improved similarity search could support memory networks to help keep track of context and basic factual knowledge, LeCun continued. Short-term memory contrasts with learned skills like finding the optimal solution to a puzzle. In the future, a machine might be able to watch a video or read a story and then answer critical follow-up questions about it.


More broadly, FAISS could support more dynamic content on the platform. LeCun noted that news and memes change every day and better methods of searching content could drive better user experiences.


A billion and a half new photos a day presents Facebook with a billion and a half opportunities to better understand its users. Each and every fleeting chance at boosting engagement is dependent on being able to quickly and accurately sift through content and that means more than just tethering together GPUs.



Facebook Stories looks like an ill-fitting mask



Facebook has finally capped off its strategy of cloning Snapchat’s USP by slotting a camera-first, ephemeral multimedia sharing feature into its entire social sharing estate.


Today it’s flicked the official switch on a global rollout of the feature in the main Facebook app, where these disappearing Stories are pinned to contacts above the News Feed — thereby making them almost impossible to ignore, especially given their fleeting lifespan.


Earlier this month the social sharing giant added a similar visual sharing feature to its Messenger app — triggering complaints that it was messing with the user experience.


It did the same, in February, with its messaging platform, WhatsApp, and also annoyed users by trying to replace a text status feature (which it’s since restored).


The Facebook Snapchat cloning strategy kicked off in August 2016 when the company debuted the disappearing Stories format on its photo and video sharing platform Instagram, clearly the most natural home for the clone.


And Instagram Stories has since apparently managed to dent Snapchat’s growth, which was clearly a core strategic aim for Facebook.


That and creating vastly more video inventory across Facebook’s portfolio of social apps — into which it can inject more lucrative video ads.


Training users to share the kind of content where ads can natively blend is really what Stories is all about.


All video, all video ads


None of this should be surprising. The company has previously publicly suggested its entire platform will be “all video” in the coming years.


It’s also taken user-hostile design decisions such as removing the ability to send text messages from its main Facebook app. (And the aforementioned attempted rubbing out of text statuses in WhatsApp).


So, basically, if you want to spam all your Facebook friends with a video of yourself wearing an animal selfie lens, Facebook will happily put all its tech at your disposal. But if you wish to swap a few words with people in your Facebook network, Facebook actively discourages that by requiring you switch to its Messenger app to do so. It’s very clear where the company’s priorities lie.


Yet it remains to be seen whether Facebook users in general are going to be flocking in droves to engage in the kind of throwaway visual sharing Stories encourages — with the format effectively asking them to repackage private lives into what amounts to a self-promoting public ‘ad format’, complete with stickers, silly effects and so on.


Thing is, for several years Facebook has had a problem with users posting fewer personal updates. This too is not surprising, given the network has something of an identity crisis these days. It’s certainly a far cry from the original concept of linking university friends across a campus.


Instagram is obviously the more natural home for people with a love of visual sharing generally (including those who want to build public followings for what they share). While WhatsApp/Messenger are for communicating privately with friends and/or in more bounded groups. So the question arises, who is Facebook for?


The people in the average Facebook network may well include a number of uni friends but also various family members, workmates across different jobs, folks you once met at a party, friends of friends, old school friends, professional connections and even random strangers.


Such an assortment of ‘connections’ likely constitutes neither a close-knit group of friends nor a unified group of people with shared interests. The only loosely linking factor is they all (maybe) met you at least once in your life.


Nor are Facebook friends likely to be a uniformly active network. I see huge variation in terms of content sharing in my own network, for example.


It will undoubtedly take a certain type of person to want to blanket broadcast Stories across such a varied and variously segmented network. (Stories can be shared with specific Facebook friends only, but the default push for the format is clearly to encourage sharing with all.)


The mask slips


Anecdotally, a very small subset of my own Facebook connections also appear to account for the vast majority of personal updates still being shared. (Doubtless exacerbated by the algorithmic effect of the Facebook News Feed promoting posts that get more engagement).


Could I imagine these most actively sharing Facebook users sharing Facebook Stories? Perhaps a few of them — so an even smaller subset.


But the handful of users I see who are still regularly sharing personal stuff on Facebook appear to be doing so either to spark debate on a particular issue/topic; to entertain and/or garner public attention/likes; or to ask for (and in so doing share) information/advice with a group — functions that all feel secondary as far as Stories is concerned, given the emphasis here is squarely on visual entertainment.


I may be wrong but it’s very hard to imagine serious or substantial topics being debated via Stories, what with all the selfie lenses, movie masks, and visual effects Facebook is touting…



Stories can also be posted to the Facebook News Feed. So there is at least the possibility that someone could use the format to try to garner comments in the usual way, by turning it into a standard piece of public Facebook content.


But I can’t imagine how such a promotional format could sensitively touch on some of the topics I’ve seen discussed across Facebook in recent years, including very difficult issues like child abuse, depression and marriage breakdown. A selfie lens really isn’t going to fit.

Instagram begins blurring some sensitive content, opens two-factor to all



Instagram is already doing a lot to spot and censor posts that violate its community guidelines, like outright porn, but now it’s also taking steps to block out potentially sensitive material that might not technically run afoul of its rules. The social network is adding a blurred screen with a “Sensitive Content” warning on top of posts that fit this description, which basically means posts that have been reported as offensive by users, but don’t merit takedowns per the posted Instagram guidelines.


This is an app feature that will add a step for users who aren’t worried about being scandalized by images posted on the network, since you’ll have to tap an acknowledgement in order to view the photo or video. The blurred view will show up both in list and grid display modes, and Instagram says it’ll prevent “surprising or unwanted experiences” in the app. It’s likely that Instagram is trying to find a balance between reducing community complaints and keeping its guidelines relatively open.


The second big update is the broad release of two-factor authentication to all users. Previously, this was available only to a limited set of members, despite the near-universal recommendation from security experts that people use two-factor wherever available. You can enable the feature by tapping the gear icon on your profile page and switching on “Require Security Code” under “Two-Factor Authentication.”


Instagram’s new blurred content filters are another step in its continued efforts to clean up its act relative to community abuse and spam. Other steps the social network has taken include disabling comments on individual posts, and offering reporting tools and support within the app for cases that involve potential self-harm.

Social media firms facing fresh political pressure after London terror attack



Yesterday UK government ministers once again called for social media companies to do more to combat terrorism. “There should be no place for terrorists to hide,” said Home Secretary Amber Rudd, speaking on the BBC’s Andrew Marr program.


Rudd’s comments followed the terrorist attack in London last week, in which lone attacker Khalid Masood drove a car into pedestrians walking over Westminster bridge before stabbing a policeman to death outside parliament.


Press reports of the police investigation have suggested Masood used the WhatsApp messaging app minutes before commencing the attack last Wednesday.


“We need to make sure that organisations like WhatsApp, and there are plenty of others like that, don’t provide a secret place for terrorists to communicate with each other,” Rudd told Marr. “It used to be that people would steam open envelopes or just listen in on phones when they wanted to find out what people were doing, legally, through warrantry.


“But on this situation we need to make sure that our intelligence services have the ability to get into situations like encrypted WhatsApp.”


Rudd’s comments echo an earlier statement, made in January 2015, by then Prime Minister David Cameron, who argued there should not be any means of communication that “in extremis” cannot be read by the intelligence agencies.


Cameron’s comments followed the January 2015 terror attacks in Paris in which Islamic extremist gunmen killed staff of the Charlie Hebdo satirical magazine and shoppers at a Jewish supermarket.


Safe to say, it’s become standard procedure for politicians to point the finger of blame at technology companies when a terror attack occurs — most obviously as this allows governments to spread the blame for counterterrorism failures.


Facebook, for instance, was criticized after a 2014 report by the UK Intelligence and Security Committee into the 2013 killing of soldier Lee Rigby by two extremists who had very much been on the intelligence services’ radar. Yet the parliamentary ISC concluded the only “decisive” possibility for preventing the attack required the Internet company to have pro-actively identified and reported the threat — a suggestion that effectively outsources responsibility for counterterrorism to the commercial sector.


Writing in a national newspaper yesterday, Rudd also called for social media companies to do more to tackle terrorism online. “We need the help of social media companies: the Googles, the Twitters, the Facebooks, of this world,” she wrote. “And the smaller ones, too — platforms like Telegram, WordPress and Justpaste.it.”


Rudd also said Google, Facebook and Twitter had been summoned to a meeting to discuss action over extremism, and suggested that a forthcoming counterterrorism strategy may include new proposals to make Internet giants take down hate videos more quickly — which would appear to mirror a push in Germany, where the government proposed a new law earlier this month to require social media firms to remove illegal hate speech faster.


So, whatever else it is, a terror attack is a politically opportune moment for governments to apply massively visible public pressure onto a sector known for engineering workarounds to extant regulation — as a power play to try to eke out greater cooperation going forward.


And US tech platform giants have long been under the public counterterrorism cosh in the UK — with the then head of intelligence agency GCHQ arguing, back in 2014, that their platforms had become the “command-and-control networks of choice for terrorists and criminals”, and calling for “a new deal between democratic governments and the technology companies in the area of protecting our citizens”.


“They cannot get away with saying… “


As is typically the case when governments talk about encryption, Rudd’s comments to Marr are contradictory — on the one hand she’s making the apparently timeless call for tech firms to break encryption and backdoor their services. Yet when pressed on the specifics she also appears to claim she’s not calling for that at all, telling Marr: “We don’t want to open up, we don’t want to go into the cloud and do all sorts of things like that, but we do want [technology companies] to recognise that they have a responsibility to engage with government, to engage with law enforcement agencies when there is a terrorist situation.


“We would do it all through the carefully thought through, legally covered arrangements. But they cannot get away with saying ‘we are in a different situation’ — they are not.”


So, really, the core of her demand is closer co-operation between tech firms and government. And the not so subtle subtext is: ‘we’d prefer you didn’t use end-to-end encryption by default’.


After all, what better way to work around e2e encryption than to pressurize companies not to pro-actively push its use in the first place… (So even if one potential target’s messages are robustly encrypted, the agencies could hope to find one of their contacts whose messages are still accessible.)




A key factor informing this political power play is undoubtedly the huge popularity of some of the technology services being targeted. Messaging app WhatsApp has more than a billion active users, for example.


Banning popular tech services would not only likely be technically futile, but any attempt to outlaw mainstream networks would be tantamount to political suicide — hence governments feeling the need to wage a hearts and minds PR war every time there’s another terrorist outrage. The mission is to try to put tech firms on the back foot by turning public opinion against them. (Oftentimes, a goal aided and abetted by sections of the mainstream UK media, it must be said.)


In recent years, some tech companies with very large user-bases have also taken high-profile stances championing user privacy — which inexorably sets them on a collision course with governments’ national security priorities.


Consider how Apple and WhatsApp have recently challenged law enforcement authorities’ demands to weaken their security systems and/or provide access to encrypted data, for instance.


Apple most visibly in the case of the San Bernardino terrorist’s locked iPhone — where the Cupertino company resisted a demand by the FBI that it write a new version of its OS to weaken the security of the device so it could be unlocked. (In the event, the FBI paid a third party organization for a hacking tool that apparently enabled it to unlock the device.)


While WhatsApp — aside from the fact the messaging giant has rolled out end-to-end encryption across its entire platform, thereby vastly lowering the barrier to entry to the tech for mainstream consumers — has continued resisting police demands for encrypted data, such as in Brazil, where the service has been blocked several times as a result, on judges’ orders.


Meanwhile, in the UK, the legislative push in recent years has been to expand the investigatory capabilities of domestic intelligence agencies — with counterterrorism the broad-brush justification for this push to normalize mass surveillance.


The current government rubberstamped the hugely controversial Investigatory Powers Act at the back end of last year — which puts intrusive powers that had been used previously, without necessarily being avowed to parliament and authorized via an antiquated legislative patchwork, on a firmer legal footing — including cementing a series of so-called “bulk” (i.e. non-targeted) powers at the heart of the UK surveillance state, such as the ability to hack into multiple devices/services under a single warrant.


So the really big irony of Rudd’s comments is that the government has already afforded itself swingeing investigatory powers — even including the ability to require companies to decrypt data, limit the use of end-to-end encryption and backdoor services on warranted request. (And that’s before you even consider how much intel can profitably be gleaned by intelligence agencies looking at metadata — which end-to-end encryption does not lock behind an impenetrable wall.)


Which begs the question why Rudd is seemingly asking tech companies for something her government has already legislated to be able to demand.


“…stop this stuff even being put up”


Part of this might be down to intelligence agencies being worried that it’s getting harder (and/or more resource intensive) for them to prioritize subjects of interest because the more widespread use of end-to-end encryption means they can’t as easily access and read messages of potential suspects. Instead they might have to directly hack an individual’s device, for instance, which they have legal powers to do should they obtain the necessary warrant.


And it’s undoubtedly true that agencies’ use of bulk collection methods means they are systematically amassing more and more data which needs to be sifted through to identify possible targets.


So the UK government might be testing the water to make a fresh case on the agencies’ behalf — to push for quashing the rise of e2e encryption. (And it’s clear that at least some sections of the Conservative party do not have the faintest idea of how encryption works.) But, well, good luck with that!




Either way, this is certainly a PR war. And — perhaps most likely — one in which the UK government is jockeying for position to slap social media companies with additional extremist-countering measures, as Rudd has hinted are in the works.


Something that, while controversial, is likely to be less so than trying to ban certain popular apps outright, or forcibly outlaw the use of end-to-end encryption.


On taking action against extremist content online, Rudd told Marr the best people to solve the problem are those “who understand the technology, who understand the necessary hashtags to stop this stuff even being put up”. Which suggests the government is considering asking for more pre-emptive screening and blocking of content. Ergo, some form of keyword censoring.


One possible scenario might be that when a user tries to post a tweet containing a blacklisted keyword they are blocked from doing so until the offending keyword is removed.


Security researcher, and former Facebook employee, Alec Muffett wasted no time branding this hashtag concept “chilling” censorship…




But mainstream users might well be a lot more supportive of proactive and visible action to try to suppress the spread of extremist material online (however misguided such an approach might be). The fact Rudd is even talking in these terms suggests the government thinks it’s a PR battle they could win.


We reached out to Google, Facebook and Twitter to ask for a response to Rudd’s comments. Google declined to comment, and Twitter had not responded to our questions at the time of writing.


Facebook provided a WhatsApp statement, in which a spokesperson said the company is “horrified by the attack carried out in London earlier this week and are cooperating with law enforcement as they continue their investigations”. But it did not immediately provide a Facebook-specific response to being summoned by the UK government for discussions about tackling online extremism.


The company has recently been facing renewed criticism in the UK for how it handles complaints relating to child safety. As well as ongoing concerns in multiple countries about how fake news spreads across its platform. On the latter issue, it’s been working with third party fact checking organizations to flag disputed content in certain regions. While on the issue of illegal hate speech in Germany, Facebook has said it is increasing the number of people working on reviewing content in the country, and claims to be “committed to working with the government and our partners to address this societal issue”.


It seems highly likely the social media giant will soon have a fresh set of political demands on its plate. And that ‘humanitarian manifesto’ Facebook CEO Mark Zuckerberg penned in February, in which he publicly grappled with some of the societal concerns the platform is sparking, is already looking in need of an update.