Android Instant Apps and the web
https://broken-links.com/2016/05/19/android-instant-apps-web/
Thu, 19 May 2016

Today’s Google I/O keynote introduced a new technology called Android Instant Apps. In a nutshell, Instant Apps displays a small, focused portion of an app when a user clicks on a related link — even if the app isn’t installed on their system. The example used in the keynote was the Buzzfeed Video app: when a user clicks on a URL to view a Buzzfeed video, and doesn’t have the app installed, the URL is intercepted by the Android system and opened in a part of the app, delivered from the Google Play servers, rather than in a browser.

https://youtu.be/cosqlfqrpFA

Instant Apps requires the app creator to build their app in a prescribed way, using deep linking and setting up modular views, but it apparently only takes about a day to modify an existing app to work in this way. There are still some unanswered questions about Instant Apps, not least whether their use is optional.

This certainly solves the biggest problems with apps: having people find them, and install them. And from the perspective of the user it probably makes no difference whether they fulfil their task through a website, an app or an Instant App, provided the transition is seamless, the loading is fast, and the experience is focused on the task.

But for developers, an Instant App doesn’t seem to offer much value beyond a website: since it’s an Android-only service, the URL that launches it will also be opened on iOS and desktop, where a website still has to exist; the duplication of effort to build that web presence remains.

So then what’s the incentive for putting in the extra work for Instant Apps? Perhaps it exists only when the app can offer something that the web is unable to, such as access to system-level APIs (like payments); or perhaps when the web alternative isn’t optimised.

Otherwise this seems like it’s a drop-in replacement for the web; and, as Klint Finley said when discussing the conceptually-related universal links on iOS:

If you want apps that work like the web, the web is still your best choice.

A Little Less Metacrap
https://broken-links.com/2015/12/01/little-less-metacrap/
Tue, 01 Dec 2015

Jeremy Keith wrote a (typically great) post about metacrap, the unnecessarily verbose and repetitive metadata in the head of web documents that’s required for content to be more easily shareable across social media. I fully agree with his broad point — there’s an awful lot of crap in the head — but there’s a flaw in his initial examples. It’s explained in this extract from Twitter’s Getting Started [with Cards] Guide:

You’ll notice that Twitter card tags look similar to Open Graph tags, and that’s because they are based on the same conventions as the Open Graph protocol. If you’re already using Open Graph protocol to describe data on your page, it’s easy to generate a Twitter card without duplicating your tags and data.

So actually the metadata you need to cater for most social sharing is Open Graph, with a few extra tags just for Twitter:

<meta name="twitter:card" content="summary">
<meta name="twitter:site" content="@adactio">
<meta property="og:url" content="https://adactio.com/journal/9881">
<meta property="og:title" content="Metadata markup">
<meta property="og:description" content="So many standards to choose from.">
<meta property="og:image" content="https://adactio.com/icon.png">

I mean, it’s still perhaps too much, and (as pointed out) would probably be best written as JSON-LD in the manifest. But there’s no redundancy, so it’s not quite as bad as painted in Jeremy’s article, even with his elegant squishing solution.
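For illustration, this is roughly how the same data might look as JSON-LD using schema.org vocabulary (a sketch only; the social networks don’t currently consume this form):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "url": "https://adactio.com/journal/9881",
  "headline": "Metadata markup",
  "description": "So many standards to choose from.",
  "image": "https://adactio.com/icon.png"
}
</script>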

OK Computer: how to work with automation and AI on the web
https://broken-links.com/2015/11/05/ok-computer-how-to-work-with-automation-and-ai-on-the-web/
Thu, 05 Nov 2015

Automated systems powered by new breakthroughs in Artificial Intelligence will soon begin to have an impact on the web industry. People working on the web will have to learn new design disciplines and tools to stay relevant. Based on the talk “OK Computer” that I gave at a number of conferences in Autumn 2015.

In 1996 Intel began work on a supercomputer called ASCI Red. Over the lifetime of its development it cost £43m (adjusted for today’s rate), and at its peak was capable of processing 1.3 teraflops. ASCI Red was retired in 2005, the same year that the PlayStation 3 was launched. The PS3 cost £425, and its GPU was capable of processing 1.8 teraflops.

IBM’s Watson is a computer built for learning. It was created with the aim of beating the human champions of the US game show Jeopardy, and in 2011 it did just that — by a lot. (In the picture below, Watson is the one in the middle.)

The Watson computer on the gameshow, Jeopardy. Credit: IBM

It’s hard to find development costs for Watson, but a conservative estimate would put them at £12m over ten years. Four years after the Jeopardy victory, the Cognitoys Dino, a smart toy for children, will go on sale. It costs £80, and is powered by Watson.

In 2012 Google used a network of 16,000 connected processors to teach a computer how to recognise photographs of cats. Three years later, the same technology can now successfully identify a photo of a “man using his laptop while his cat looks at the screen”.

Man using his laptop while his cat looks at the screen

I’ve told these three stories to make a point: that cheap and plentiful processing has allowed Artificial Intelligence to improve very much, very quickly.

There are lots of different strands to A.I., but the big recent breakthroughs have been in deep learning using neural networks. Very broadly, a neural network is a series of nodes that each perform a single action, of analysis or classification, on an input. The result of the action is passed on to another set of nodes for further processing, until the network returns a final output, stating with some degree of certainty what the input is. It can return a rough answer quickly, or a more precise answer slowly. It’s sort of how our brains work.
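As a toy sketch of that node-by-node flow in JavaScript (the weights are invented for illustration; a real network learns them from training data):

const sigmoid = x => 1 / (1 + Math.exp(-x));

// Invented weights, purely for illustration
const hiddenWeights = [[0.5, -1.2], [0.8, 0.3]];
const outputWeights = [1.5, -0.7];

function classify(input) {
  // Layer 1: each hidden node performs one small analysis of the input
  const hidden = hiddenWeights.map(w =>
    sigmoid(w[0] * input[0] + w[1] * input[1])
  );
  // Layer 2: the output node combines the hidden results into a verdict
  return sigmoid(outputWeights[0] * hidden[0] + outputWeights[1] * hidden[1]);
}

// Returns a degree of certainty between 0 and 1
console.log(classify([0.9, 0.1]));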

As an illustration of the potential of deep learning, computer scientists in Germany have used neural networks to analyse the painting styles of the Old Masters and apply them to photographs. This is not simply applying image filters — the system detects sky, water, trees, buildings, and paints them according to the styles of artists such as Turner, Van Gogh, and (shown here) Munch.

A photo of a street, and the same image rendered in the style of Munch

And having conquered the world of art, now these A.I. systems are coming for your job.

This graph from an article on Gartner.com shows a rough approximation of the likelihood of your job being automated by learning systems in the near future. It works on two axes: routine, and structure.

Graph showing the likelihood of a job being automated

To summarise it, if your job deals with abstractions and varies greatly from day to day, you’re probably safe. But if you deal with hard data, like numbers, and you do the same thing every day, it’s probably time to start nervously looking over your shoulder. As a stark illustration of this, recent figures show that the number of employees in the finance departments of major US companies has dropped by 40% since 2004.

More directly related to the Web industry, a recent study by Oxford University and Deloitte indicated that it’s “not very likely” that a web design and development professional will lose their job to automation in the next twenty years. However, their definition of “not very likely” is a 21% chance; to put that more bluntly, one in five of us could be out of work, due to an overall shrinking of the available job market.

There are already signs that automated systems could replace programmers in the future. MuScalpel, a system developed by University College London, can transplant code from one codebase to another without prior knowledge of either; in one test it copied the H.264 media codec between projects, taking 26 hours to do so — a feat which took the team of human engineers 20 days. And Helium, from MIT and Adobe, can learn (without guidance) the function of code and optimise it, providing efficiencies of between 75 and 500 percent in tests.

These systems are a few years away from market, but we’re already starting to see automation move into the industry in smaller ways. Services such as DWNLD, AppMachine, and The Grid offer users a tailored mobile app or website within minutes, with styles and themes based on the content and information pulled from existing social profiles and brand assets. These services, and others like them, will become smarter and more available, skimming away a whole level of brochure sites usually offered by small digital agencies or individuals.

A common criticism of services like The Grid is that they can only produce identikit designs, with no flair or imagination. But look at the collection of websites below; people designed these, and flair and imagination are nowhere to be seen.

Screenshots of homogenous website design

These screenshots are taken from Travis Gertz’ excellent article, Design Machines, in which he highlights the problem:

The work we produce is repeatable and predictable. It panders to a common denominator. We build buckets and templates to hold every kind of content, then move on to the next component of the system.

Digital design is a human assembly line.

Looking back at the Gartner chart on the likelihood of automation, I’d say that “a human assembly line” would be somewhere near the bottom left. And we’ve only ourselves to blame. Gertz again:

While we’ve been streamlining our processes and perfecting our machine-like assembly techniques, others have been watching closely and assembling their own machines.

We’ve designed ourselves right into an environment ripe for automation.

All of the workflows we’ve built, the component libraries, the processes and frameworks we’ve made… they make us more efficient, but they make us more automatable.

However, brilliant writer that he is, Gertz doesn’t only identify the problem; he also offers a solution. And that solution is:

Humans are unpredictable mushy bags of irrationality and emotion.

This is a good thing, because a computer can never be this; it can never make judgements of taste or intuition. Many people are familiar with the Turing test, where a human operator has to decide if they’re talking to another human, or a bot. But there’s a lesser-known test, the Lovelace test, which sets creativity as the benchmark of human intelligence. To pass Lovelace, an artificial agent must create an original program — such as music, or a poem — that it was never engineered to produce. Further, that program must be reproducible, yet impossible for the agent’s original creator to explain.

The idea is that Lovelace should be impossible for an artificial agent to pass. Creativity should be impossible for a computer. And it’s this, not tools, that offers us the opportunity to make our roles safe from automation.

Andrew Ng, who helped develop Google’s deep learning systems and now works at Chinese search company Baidu, has serious concerns that automation is going to be responsible for many job losses in the future, and that the best course of action is to teach people to be unlike computers:

We need to enable a lot of people to do non-routine, non-repetitive tasks. Teaching innovation and creativity could be one way to get there.

But as well as learning to be creative, we should also become centaurs — that is, learn to enhance our abilities by combining our instincts with an intelligent use of artificial intelligence. Many smart people have begun considering the implications of this; Cennydd Bowles wrote:

A.I. is becoming a cornerstone of user experience. This is going to be interesting (read: difficult) for designers.

To the current list of design disciplines we already perform — visual, interaction, service, motion, emotion, experience — we will need to add one more: intelligence design.

Previously I said that A.I. is improving very quickly, and this also means that it’s becoming much more available very quickly. Services that were once only available to the internet giants are now available to everyone through APIs and products, at reasonable-to-free prices.

Remember Watson? All of its power is available to use through IBM’s Developer Cloud; their Bluemix cloud platform and a Node SDK give you access to powerful and sophisticated image and text services via some RESTful APIs. IBM wants Watson to be the ubiquitous platform for AI, as Windows was to the home PC and Android is to mobile; as a result, Developer Cloud is free for developers, and reasonably priced for businesses.

What do you get with Watson? For a start, some visual recognition tools, like the Google one I mentioned at the beginning of this piece. Upload an image, and Watson will make an educated guess at explaining the content of that image. It works well in most cases (although it was bizarrely convinced that a portrait of me contained images of wrestling).
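As a sketch of the shape of such a call (the URL, credentials and response fields below are placeholders, not the real Watson endpoints; check the Developer Cloud docs for those):

const fs = require('fs');

async function classifyImage(path) {
  // POST the raw image bytes to a recognition endpoint (placeholder URL)
  const response = await fetch('https://api.example.com/v1/classify', {
    method: 'POST',
    headers: {
      'Authorization': 'Basic ' + Buffer.from('user:pass').toString('base64'),
      'Content-Type': 'application/octet-stream'
    },
    body: fs.readFileSync(path)
  });
  const result = await response.json();
  // Hypothetical response shape: a list of labels with confidence scores
  result.labels.forEach(l => console.log(l.name + ': ' + l.score));
}

classifyImage('portrait.jpg'); // hopefully not classified as wrestling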

These identification errors should reduce in frequency as you give Watson more data, because deep learning thrives on training and data. That’s why all the major online photo tools, from Google Photos to Flickr, entice you with huge storage limits, in many cases practically unlimited; they want you to upload more, because it makes their services better for everyone. These services include automatic tagging of photos and content-based search; Google Photos in particular is very good at this, easily finding pictures of animals, places, or even abstract concepts like art.

Google Photos search results for ‘street art’

Eventually these offerings will raise the expectations of users; if your photo service doesn’t offer smart search, it’s going to seem very dumb by comparison.

Watson can also find themes in batches of photos, offering insights into users’ interests and allowing for better targeting. This is another reason why photo services want you in: because you become more attractive to advertisers.

I should add that Watson is not the only game in town for image recognition; alternatives include startups like Clarifai, MetaMind, and SkyMind; and Microsoft’s Project Oxford, which powers their virtual assistant Cortana. Project Oxford has best-in-class face APIs, able to detect, recognise, verify, deduce age, and find similar faces; how you feel about that will largely depend on your level of trust in Microsoft.

While image recognition is interesting and useful, the ‘killer app’ of AI is natural language understanding. The ability to comprehend conversational language is so useful that every major platform is adopting it; Spotlight in OS X El Capitan allows you to search for “documents I worked on last week”, while asking Google “how long does it take to drive from here to the capital of France?” returns directions to Paris.

If you want to add natural language understanding to your own apps, one of the best tools around is the Alchemy API, originally a startup but now part of the Watson suite. This offers sentiment analysis, entity and keyword extraction, concept tagging, and much more.

Natural language understanding is a key component in a new wave of recommendation engines, such as those used in Pocket and Apple News. Existing recommendation engines tend to use ‘neighbourhood modelling’, basing recommendations on social graph interaction; but the new AI-powered engines understand the concepts contained in text content, allowing it to be better matched with other, similar content.

Where AI really excels, however, is when applied to conversation. Talking to a computer is nothing new; to give just one example, IKEA have had a customer service chatbot called Anna on their website since at least 2008. But although Anna can answer a straightforward question, she has no memory; if you don’t provide the same information in a follow-up as you did in the previous question, you’ll get a different answer. This isn’t really a conversation, which has requirements as defined here by Kyle Dent:

A conversation is a sequence of turns where each utterance follows from what’s already been said and is relevant to the overall interaction. Dialog systems must maintain a context over several turns.
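As a toy sketch of what maintaining context over turns means in code (the bot, its entities and its answers are all invented for illustration):

// Context persists across turns, so follow-ups can omit known details
const context = {};

function parse(utterance) {
  // Extremely naive entity extraction, for illustration only
  const entities = {};
  const day = utterance.match(/saturday|sunday/i);
  const store = utterance.match(/croydon|brighton/i);
  if (day) entities.day = day[0].toLowerCase();
  if (store) entities.store = store[0].toLowerCase();
  return entities;
}

function handleTurn(utterance) {
  Object.assign(context, parse(utterance)); // merge new info into the context
  if (context.store && context.day) {
    return 'The ' + context.store + ' store is open 9 to 5 on ' + context.day + '.';
  }
  return context.store ? 'Which day?' : 'Which store?';
}

console.log(handleTurn('When is the Croydon store open on Saturday?'));
console.log(handleTurn('And on Sunday?')); // the store is remembered from the previous turn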

Maintained context is what today’s AI conversations offer that was previously missing. Google are using this to trial automated support bots, trained on thousands of previously recorded support calls. (The same bots, trained on old movies, can be surprisingly philosophical.) And it’s what powers the new wave of virtual assistants: Cortana, Siri, Google’s voice search, and Amazon Echo. The latter is particularly interesting because it lives in your home, not your phone, car or TV; it’s the first move into this space, soon to be joined by robots with personality, like Jibo, or Mycroft.

All of these virtual assistants share another feature, which is that you can interact with them by voice. Voice isn’t essential to conversation, but it helps; it’s much easier than using a keyboard in most cases, especially in countries like China, with complicated character input and high levels of rural illiteracy.

Making computers recognise words has been possible for a surprisingly long time; Bell Labs came up with a voice recognition system back in 1952, although it could only recognise numbers, and only spoken by a specific person. There was a further breakthrough in the 1980s with the ‘hidden Markov model’, but deep learning has hugely improved voice recognition in the past three years; Google says its voice detection error rate has dropped from around 25% (one misidentified word in every four) in 2012 to 8% (one in twelve) earlier this year — and that’s improving all the time. Baidu says their error rate is 6% (one in seventeen), and they handle some 500 million voice searches every day.

Voice recognition is available in some browsers, namely Chrome and Firefox, through the Web Speech API, and many other products, including Watson and Project Oxford, offer speech-to-text services. These all require microphone input, which unfortunately rules out Safari and every iOS browser, as they don’t yet provide the necessary access.
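In a supporting browser, getting a transcript takes only a few lines; a minimal sketch (Chrome ships the constructor with a webkit prefix):

// Minimal speech-to-text sketch using the Web Speech API
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

const recognition = new SpeechRecognition();
recognition.lang = 'en-GB';
recognition.interimResults = false; // only report final results

recognition.onresult = function (event) {
  console.log('Heard: ' + event.results[0][0].transcript);
};
recognition.onerror = function (event) {
  console.error(event.error);
};

recognition.start(); // prompts the user for microphone permission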

But while voice recognition can identify the individual words in an utterance, it doesn’t understand in any way the meaning of those words, the intent behind them. That’s where the previously-mentioned breakthroughs in natural language understanding come in. There are a growing number of voice-based language understanding products now available, including Project Oxford’s LUIS (Language Understanding Intelligence Service — I love a good acronym) and the startup api.ai. The market leader in parsing complex sentences is Houndify, from SoundHound, but the service I like for its ease of use is Wit.

Wit was once a startup but is now owned by Facebook. It’s free to use, although all of your data belongs to Facebook (which may be a deal breaker for some) and is available to every other user of the service — because, as I said earlier, more data gives deep learning systems more power. It has SDKs for multiple platforms, but where it wins for me is in its training system, which makes it very easy to create an intent framework and correct misinterpreted words.
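For a flavour of how simple it is to consume, here’s a sketch of a query to Wit’s HTTP message endpoint (the token is a placeholder, and the response fields depend entirely on how you’ve trained your app):

async function understand(sentence) {
  const response = await fetch(
    'https://api.wit.ai/message?q=' + encodeURIComponent(sentence),
    { headers: { 'Authorization': 'Bearer YOUR_WIT_TOKEN' } } // placeholder token
  );
  // The response describes intents and entities, with confidence scores
  console.log(await response.json());
}

understand('Wake me up at 7 tomorrow');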

Wit is the power behind M, Facebook’s entry into the virtual assistant market. M is notable because it only lives inside Facebook Messenger, which is a pattern I’m sure we’re going to see much more in the future: the AI-powered shift from the graphical user interface to the conversational; from GUI to CUI.

There’s a reason that Facebook paid an estimated £15 billion for WhatsApp, and it’s not solely their £6.5 billion in sales: it’s because messaging apps are huge, and WhatsApp is the biggest of all, with some 900 million monthly active users. What’s more, messaging apps are growing really quickly; they’re the fastest-growing online behaviour of the last five years, and an estimated 1.1 billion new users are set to come on board in the next three years.

And messaging apps as we know them in the West are actually very limited compared to messaging apps in Asia, especially China, where they are more like platforms than apps: you can shop, bank, book a doctor’s appointment… basically, anything you can do on the web today. Messaging apps are a huge growth area, and they’re going to be powered by conversational assistants.

We can see this beginning already with apps like chatShopper (currently Germany only) which lets you talk to a personal shopper to make purchases through WhatsApp; and Digit, a savings app that communicates almost entirely by text message. These currently use a mix of automatic and human operators (this is also how Facebook’s M works right now), but as AI becomes more intelligent the bots will take over from the humans in many cases.

More advanced, fully automated services include x.ai’s ‘Amy’, or the apparently very similar Clara. These are meeting assistants that work by email; you ask them to find a suitable time and place for a meeting, and they communicate with all the participants until the arrangements are made, then email you back with the final details.

Conversational UI is an idea whose time has come, enabled only now by AI and natural language understanding. To add it to your own apps you could look at Watson’s Dialog service (a similar service is also apparently coming to Wit in the near future), or a startup such as re:infer. But it’s not only a case of plumbing in a service; it will also require an addition to the list of design disciplines I mentioned previously: conversation design.

I should note that new interaction models are still prone to old problems; security, privacy and trust should always be paramount in your applications. Remember the Samsung TV scandal earlier this year? Do we really want a repeat of this but with an artificially intelligent Barbie in children’s bedrooms?

The ready availability of deep learning services has come upon us so quickly that we’ve barely realised; many of the services I’ve mentioned didn’t exist even 18 months ago. This is a little bit scary, and a huge opportunity. There’s little doubt that AI’s going to take routine jobs from the web industry; so as AI improves, we need to improve with it. The way to do that is to harness AI for our own use, and apply creative, irrational, human thinking to it.

Cross-posted to Medium.

The Changing Form of the Web Browser
https://broken-links.com/2015/11/04/the-changing-form-of-the-web-browser/
Wed, 04 Nov 2015

I wrote an article, The Changing Form of the Web Browser, for rehabstudio (my employer). It’s about the present and near-future of the web browser, in a market where the consumption of information and services is shifting. It’s quite a long piece, and necessarily broad for a non-technical audience, so there is perhaps a lack of nuance in its conclusions. Still, I’m quite proud of it; a lot of research and writing went into it.

There’s an extract below, but I suggest you read the whole thing in context if you can.

The changing shape of content and services

In the past, a publisher would put content – say, a news story or other longer-form article – on their website, and people would visit the website to read it. In a modern information flow, content is published on the website first, but then pushed out to Flipboard, Facebook Instant Articles, Apple News, et al. It might perhaps even be modified for video platforms like Snapchat. Discovery is rarely through the home page of a news website, but more often through social channels, apps, and email.

In this model the publisher becomes a wire service, sacrificing control over how the content is displayed, and direct advertising revenue, for a greater audience. Many (most?) people will never view the content in its original home on the web, except perhaps as a link to a web view in a browser embedded inside an app like Twitter.

It’s not only content that’s seeing this shift; the way we access information services has also changed. A web portal like Yahoo! collates everything that a user could want – news, weather, stock information, shopping bargains, etc – into a single destination. People today still want the same things, but rarely in the same place, preferring instead to break the information up into multiple single-focus apps which they can access more conveniently.

Some information, such as calendar appointments, map directions, or bus timetables, is only useful at certain times. Digital assistant apps like Google Now, Siri, and Cortana, with devices like Android Wear and Apple Watch, promise to deliver relevant information at appropriate moments, without any interaction with a browser at all.

Sergio Nouvel identifies this change as “a shift from web pages to web services: self-sufficient bits of information that can be combined to other services to deliver value”. The logical conclusion to this is that the browser may disappear almost entirely in the future, as the information we require from it is capable of being displayed by other means. But in the meantime, it’s useful to look at how browsers are adapting to these changes.

The Changing Form of the Web Browser.

Innovation in Mobile Browsers, and the iPhone
https://broken-links.com/2015/09/14/innovation-in-mobile-browsers-and-the-iphone/
Mon, 14 Sep 2015

Last week I wielded the mighty power of Twitter to say this:

If you use an iPhone I feel a bit sorry for you, because you’re missing out on the really innovative stuff happening in mobile browsers.

A few people asked me what I meant by that, perhaps thinking that I was criticising iPhones in general (I wasn’t[1]), so I want to take a moment to elaborate on my statement. To do that, I’ll begin with a story.

On the day that I sent the tweet, I had earlier received this notification on my Android phone (and watch):

Android phone and watch showing a Facebook notification

The content of the message isn’t relevant; the important thing here is that this is a notification from Facebook, and I don’t have the Facebook app installed on my phone. Neither did I have the Facebook web app open in my browser at the time I got this notification.

The reason I saw this notification is that the last time I visited m.facebook.com I was shown a dialog asking if I wanted to allow notifications (I did):

Chrome’s notification permission dialog on m.facebook.com

And the reason I saw this dialog is because Facebook have recently implemented the Notifications API, using Service Workers, on their mobile site—if your browser supports it. And my browser does, because I don’t use an iPhone.

I’ve written before about how, in my opinion, the long-term health of the open web is at risk. Paul Kinlan recently gave a talk, The Future of the Web on Mobile (I highly recommend reading this), in which he more plainly states the form of that risk – but also states the measures that are in the works to combat it. In short, to make the web more competitive with native apps while keeping its existing advantages.

These measures include details such as an improved flow for adding a home screen launcher, and making a web app feel part of the operating system with browser chrome theme colours and a loading screen background colour declared in the manifest. Beyond this, the critically important service workers allow for offline file caching, background syncing, and the push notifications that started all this ruminating – and which should soon be further extended to allow user interaction. And, slightly further in the future, new hardware APIs like Bluetooth and NFC will permit interaction with the physical web.
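For a sense of how little the page itself has to do, here’s a minimal sketch of opting in to push notifications (the '/sw.js' path is a placeholder; the worker file itself, which listens for push events and shows the notification, is not shown, and some browsers also require an application server key when subscribing):

async function enablePush() {
  // Register the service worker that will receive pushes in the background
  const registration = await navigator.serviceWorker.register('/sw.js');

  const permission = await Notification.requestPermission();
  if (permission !== 'granted') return;

  // Subscribe to push; every push must result in a visible notification
  const subscription = await registration.pushManager.subscribe({
    userVisibleOnly: true
  });

  // The endpoint goes to your server, which uses it to deliver pushes
  // even when no page is open
  console.log(subscription.endpoint);
}

enablePush();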

The implementation of these innovations is largely being led by Chrome, but they’re also available or incoming (to varying degrees) in Firefox and Opera. And none of those browsers are available on iOS. Now, as far as I know, Apple could also be working on a lot of this in Safari – but that’s updated annually at most, so realistically, unless they change their release pattern, the earliest iPhone users will see any of this is September 2016.

So that’s what I meant when I said:

If you use an iPhone I feel a bit sorry for you, because you’re missing out on the really innovative stuff happening in mobile browsers.

[1] My working title for this post was ‘iPity the Fools’, which works as a gag but sounds like flame bait, which is not my intention. iPhones are great, I just think Apple’s browser policy is awful and they don’t prioritise the web.

Feeling Like An Unwelcome Guest on medium.com
https://broken-links.com/2015/09/01/feeling-like-an-unwelcome-guest-on-medium-com/
Tue, 01 Sep 2015

I have a shortcut to medium.com on the home screen of my Android phone. It’s there because I browsed the site a couple of times and Chrome’s app install banners prompted me to add it to my home screen, so I did. Some time later I launched the site from the shortcut icon, and it opened and loaded so quickly that I actually thought it had retrieved a copy from an offline cache. But it hadn’t; it was just very well optimised. So ten points to the Medium team for that.

Today I launched the site from the shortcut again – but this time the experience was somewhat different. So different that I have to take away all the points I previously awarded to the team. The problem is that when I launched the site today, I had the door emphatically slammed in my face.

Doorslam message on medium.com

“Medium is better when it lives on your home screen”, they say. Well, it does live on my home screen; that’s where I launched it from. What they actually mean by these words is: ‘you should install the app’. And I know they mean this because it then shows me a button, urging me to ‘continue in app’, which takes me to the Medium app install page on Google Play. And just below that is an app install banner which lets me skip Google Play and install the app directly. Also, there seems to be no way to continue without installing the app.

The message ‘no thanks’ with secondary priority

Wait… my mistake – there’s a small, somewhat apologetic, secondary ‘no thanks’ link hidden behind the app banner. This makes me feel a little guilty and silly for wanting to take what I understand to be the apparently dumb action of viewing medium.com content on the web.

Still, once I took the sucker’s option of clicking ‘no thanks’, I could at least see the stories on medium.com. So I clicked the top one that was recommended for me.

An article page on medium.com, with yet another app install banner

And the first thing on the page is another banner, this one telling me to “change my mind” and install the app – with a convenient link to install from Google Play.

Team at medium.com: I am very disappointed in you. After two actions required to visit the website (close the app install banner, click ‘no thanks’), you still think I might not want to read the content in my browser? May I ask: are you getting paid by the app install numbers? Because to spend so much time making your website ‘modern and mobile-optimised’ – which it really is – and then to slap on such a frustrating user experience… I don’t know, maybe I should give you the benefit of the doubt, presume your marketing people told you to do this. Because otherwise I’d hate to think anyone could do something so self-sabotaging.

I made the choice to visit your website, and you not respecting that choice leaves me feeling like a very unwelcome guest.

I’ve cross-posted this article to Medium, because I enjoy the irony.

Update: Cara Meverden, product manager for Medium’s Android app, has responded with an explanation of what happened.

Mobile Browsing Around The World
https://broken-links.com/2015/08/26/mobile-browsing-around-the-world/
Wed, 26 Aug 2015

I find it fascinating to see the variance in browser use in the diverse regions of the world, and nowhere is that variance more apparent than in mobile web browsers. While in the West we may be used to Chrome and Safari being more or less the only game in town, elsewhere in the world the story is quite different. In this article I’m going to take a look at a few charts which illustrate that difference.

The stats used here are collected from the 30 days prior to 25th August, taken from StatCounter.com. They come with the usual disclaimer about the impossibility of getting completely accurate data, and don’t always include feature phone browsers, so should therefore be treated as indicative rather than conclusive. With the caveats out of the way, let’s begin.

Starting with Europe, we can see that Chrome dominates, with 44.5% of the market. I think it’s safe to presume that most of these come from Android devices, and the ‘stock’ Android browser (I know, there’s no such thing) adds another 16.5% to that share. Safari, on iOS, is the second most used mobile browser in the region, with a 27.6% share. IE Mobile and Opera run a distant fourth and fifth, respectively. Sitting in seventh place with a 1% share – lower even than Blackberry – is UC Browser; I’d imagine most people in this region probably won’t have heard of it, but it’s much bigger in other markets, as will be shown shortly.

Chart: mobile browser market share in Europe, 26 July to 24 August 2015 (StatCounter)

In North America it’s somewhat different. Chrome drops to second place with 39.2%, and even combining that figure with the Android browser’s 9.2% only barely exceeds Safari’s dominant 45.5% in this continent. IE Mobile and Opera again round out the top five, but with less than half the share they own in Europe. UC Browser beats Blackberry, but still only just exceeds 1%.

Chart: mobile browser market share in North America, 26 July to 24 August 2015 (StatCounter)

Oceania’s story is quite similar, although here Safari has an even more dominant 53.5% share, much better than even Chrome’s 30.9% and Android Browser’s 9.8% combined. UC Browser does a little better with 1.9%; IE Mobile is fifth with 1.6%. Opera is almost invisible here.

Chart: mobile browser market share in Oceania, 26 July to 24 August 2015 (StatCounter)

In South America, Android is even more dominant than in Europe. Chrome has 52.7% share, and Android Browser, 20.4%. Safari gets just 11%, above Opera with 6.8%. IE Mobile has its largest share in any market, with 5.2%, while UC Browser gets 2%.

Chart: mobile browser market share in South America, 26 July to 24 August 2015 (StatCounter)

The first big variance from the Android / Apple duopoly comes in Asia. Chrome still has the biggest share with 29.8% and Android Browser is third with 15.9%, but second place is taken by UC Browser with 25% share – its largest in any market. UC Browser uses data compression on images, videos and other assets, a valuable service in parts of Asia where network connectivity is patchy and data plans can be expensive. Opera, which also has advanced data compression features, does well here too, with 13.8% of the market. Safari comes in fifth with 9.9%.

Chart: mobile browser market share in Asia, 26 July to 24 August 2015 (StatCounter)

The biggest outlier is in Africa, where Opera has an enormous 58.7% share, again due to the importance of its data compression features in countries with variable or limited access to network and data. Chrome and Android Browser come a distant second and third with 15.2% and 9.6% respectively. Africa provides UC Browser with its second-highest share in any market: 5.8%. Blackberry’s 3.4% share narrowly beats out Safari’s 3.3%, its lowest showing in any market.

Chart: mobile browser market share in Africa, 26 July to 24 August 2015 (StatCounter)

What conclusions can we draw from these statistics? Firefox is a lame duck on mobile, not managing even a single percentage point in any market. IE Mobile fares better, but not by much. And I’m surprised that Blackberry still manages single figures in some markets.

If you take only one lesson from these figures, it’s that if you’re making a website for a global market you should really be testing on Opera and UC Browser, especially their proxy editions.

Exploring URL discovery with the Physical Web
https://broken-links.com/2015/08/06/exploring-url-discovery-with-the-physical-web/
Thu, 06 Aug 2015

One of the emerging concepts that I’m fascinated and excited by is the Physical Web. If you haven’t heard of this, a brief and very coarse summary is that it’s the idea of transmitting URLs from beacon devices, commonly using Bluetooth Low Energy (BLE). Current beacon schemes are largely based on Apple’s iBeacon protocol, which transmits a unique ID that requires an app receiver to decode and turn into an action. The Physical Web’s difference is that URL transmission requires no app, decentralising the process.

Making any device able to transmit a URL is rich with possibilities: from super low-friction discoverability of information about nearby places (imagine a page of search results showing only the things immediately around you), to immediate interaction with nearby physical objects. While I’ve still yet to actually build anything using the Physical Web idea, I’ve started to explore what I think it can be useful for.

I think that one of the areas the Physical Web will prosper in is retail, starting with the very simple idea of showing the shops that are around you in an enclosed space (commonly a mall or shopping centre). If each shop had a beacon (or beacon swarm), a person entering the space would have an immediate idea of what they could find within, say, 30m of them. This wouldn’t replace a map, but would at least indicate the available variety of outlets.

Sketch: nearby shops surfaced by Physical Web beacons in a shopping centre

You could argue that this same scenario could be solved with an app. The app install is a small barrier to entry, and the app would have to be centrally managed to keep up to date with all the stores in the shopping centre; but I expect the line-up of stores remains mostly stable, so updating wouldn’t be a major hassle.

But using the Physical Web would mitigate even these small problems: there would be zero install, no requirement for centralised updating, and no app development time required at all.

As an aside, why not combine the two? An app shell which aggregates URLs for all the shops in the space and further crawls their pages for structured data showing offers, exclusive discounts, etc? There is still the app install barrier to pass, but once installed the content would all come from local, decentralised sources.

Sketch: an app shell aggregating beacon URLs and structured offer data from nearby shops

Beyond the mall, the Physical Web would become even more useful in spaces where there’s a high rate of change in the vendors that are present on any given day – for example, a street market. If each vendor has a beacon, visitors to the market could easily see who’s selling that day.

Sketch: street market stalls broadcasting their URLs via beacons

I can see there being an advantage in centralising these results too; the host market could set up an aggregator device (or devices), collating all the transmitted URLs and displaying the results on their website. This would inform people at home who was present that day, giving them information they need to plan their visit.

These ideas only cover URL discovery, but open up many possibilities for further interaction.  There are many more ideas already making their way into the Physical Web Cookbook, and I plan to return to this subject in the future. I should reiterate that these are just preliminary ideas based on my limited exposure to the Physical Web concept, and what I really look forward to is collaborating with a broader group to kick the tyres and explore new ideas. To that end, feel free to drop me a line in the comments if you want to talk.

To try out the Physical Web for yourself you’ll need some beacons and a smartphone. I recommend this Beacon Dev Kit from Radius Networks for the former, and for the latter you can install the app for Android, or set up Chrome on iOS. The Physical Web uses the Eddystone protocol, on which I previously wrote a briefing note.

Hardware APIs coming to browsers
https://broken-links.com/2015/07/23/hardware-apis-coming-to-browsers/
Thu, 23 Jul 2015

There are many future web stack features that I see as being vitally important to the long-term health of the web. These include extensible web projects such as web components and CSS Houdini, as well as the scripting capabilities in ES7 and beyond. These features give developers better tools, and finer control and power.

But I feel that what’s more important to the immediate success of the web are features that provide parity with native mobile apps. I’ve written previously about the importance of service workers in providing this parity, but there are also a few new features breaking through that I’m equally excited about, as they provide access to previously unavailable hardware.

The first is the Web Bluetooth API, which has experimental implementation in Chrome OS devices (running the Dev channel, behind a flag). This Promise-based API allows the browser to scan for local Bluetooth Low Energy (BLE) devices, such as speakers or fitness tracking wearables, then interact with them.

Scanning is as easy as requesting a list of local devices, filtered by a list of services – for example, to find a BLE device which transmits battery data:

// Ask the browser to scan for BLE devices advertising a battery service
navigator.bluetooth.requestDevice({
  filters: [{ services: ['battery_service'] }]
}).then(function (device) {
  console.log(device.name); // the user-visible name of the chosen device
});

Interacting with the device requires a little more knowledge of the Bluetooth protocol – this great introduction by François Beaufort covers the basics.
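As an indicative sketch (the method names have shifted between drafts of the spec, so treat this as a shape rather than gospel), reading the battery level looks something like this:

navigator.bluetooth.requestDevice({
  filters: [{ services: ['battery_service'] }]
})
  .then(function (device) { return device.gatt.connect(); })
  .then(function (server) { return server.getPrimaryService('battery_service'); })
  .then(function (service) { return service.getCharacteristic('battery_level'); })
  .then(function (characteristic) { return characteristic.readValue(); })
  .then(function (value) {
    // The battery level is a single byte, 0 to 100
    console.log('Battery level: ' + value.getUint8(0) + '%');
  })
  .catch(function (error) { console.error(error); });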

Further from implementation is the Web NFC API, which gives access to Near Field Communication devices – such as tap-to-pay systems. Currently only at the spec stage, it’s also Promise-based so seems easy to get started with.

For example, this is how it’s proposed to read data from an NFC device:

// Proposed API shape: request an NFC adapter, then listen for messages
navigator.nfc.requestAdapter().then(function (adapter) {
  adapter.onmessage = function (event) {
    console.log(event.message.data); // payload read from the NFC tag
  };
});

Of course there’s no guarantee that other browsers will implement these APIs (I have particular doubts about Safari), but I’m excited by the possibility that they may be. Because parity with native mobile apps gives the browser the power to be a key component of the thingnet, communicating with the multitude of devices in the smart homes, offices and public spaces of the future.

I know that from the requests we at +rehabstudio get from our clients, connected devices are only becoming more and more popular in creative concepts. And I’m so happy with the notion that in the future I’ll be able to build those concepts as browser-based applications.

‘Thingnet’, by the way, is the word I keep using in place of ‘internet of things’, but it doesn’t seem to be catching on so I may stop.

Eddystone – A Briefing Note on Google’s New Beacon Format
https://broken-links.com/2015/07/15/eddystone-a-briefing-note-on-googles-new-beacon-format/
Wed, 15 Jul 2015

Yesterday Google announced ‘Eddystone’, a new open Bluetooth beacon format which works on Android and iOS. I’ve been doing a bit of reading about it to understand the technology and its potential, and I put together a briefing note about it for my colleagues. I’m a believer in maximising returns on my content, so it seems like a good opportunity to republish that briefing note here.

This is a very rapid and shallow look into beacons, and I’ve no doubt made some omissions or inaccuracies, so apologies in advance for that. If you think I’ve made any huge oversights or errors, please feel free to let me know in the comments.

In a Nutshell

Eddystone is, as mentioned, an open Bluetooth beacon standard. It works on Android and iOS. It does all that some current formats – namely iBeacon and Physical Web – do, plus more. It comes with associated APIs and libraries that make it very extensible and powerful.

Branding

Although this is a Google-led initiative, they very much want this to be seen as an open standard. So the branding is just Eddystone, without mention of Google. Fun fact: It’s named after the UK’s Eddystone Lighthouse.

Opportunities

Anything based around immediate location. Google have recently run a project around live transport information in Portland, and are talking about pushing hyperlocal information into Google Now. Think about beacons in shopping centres, museums, art galleries, markets… anywhere you need passive information about the things immediately around you.

Comparing Format Standards

  • iBeacon is an Apple-only standard which transmits a single Unique Identifier (UID). Apps can use this to trigger an action.
  • Physical Web is a Google-led open initiative which transmits a single URL. Apps can present this to the user or look up related information from metadata.
  • Eddystone is a Google-led open initiative which combines features of both, plus more. It has a UID and a URL, plus Ephemeral Identifier (EID) and Telemetry (TLM) data – details on these below.

Here’s a quick comparison:

Beacon        | Payload            | Platform
iBeacon       | UID                | iOS
Physical Web  | URL                | Android, iOS
Eddystone     | UID, URL, EID, TLM | Android, iOS

NB: Physical Web will continue as an initiative, but using Eddystone’s URL frame as a foundation instead of its own UriBeacon format.

Unique Features

As mentioned above, in addition to the UID and URL there are two payload features unique to Eddystone:

  • Ephemeral ID is an ID which changes frequently and is only available to an authorised app, not broadcast generally. It’s intended for short-term use; the use cases mentioned are for finding luggage when travelling, or finding lost keys.
  • Telemetry is data about the beacon or attached sensors: e.g. battery life, temperature, humidity.

APIs

Along with Eddystone, Google announced a handful of new / extended APIs:

  • Proximity Beacon associates beacons with data in the cloud, allowing for very rich information, including latitude and longitude co-ordinates, and an association with the Places API (similar to Facebook BLE Beacons).
  • Place Picker is an extension of Places, showing beacons / places in your immediate vicinity.
  • Nearby uses a combination of Bluetooth, WiFi and ultrasound to make it really easy for devices to find, and communicate with, each other.

Although Eddystone requires these Google APIs to be useful at the moment, the fact that it’s an open standard means that anyone will be able to make open alternatives in the future.

iOS devices require an extra library to work with Eddystone.

The Bigger Picture

Eddystone is part of Google’s push further into the ‘Internet of Things’. Other parts of that push include:

  • Brillo, a lightweight version of Android for low-powered devices;
  • Weave, a secure home devices protocol for device-to-device or device-to-cloud communication; and
  • Thread, an alternative to Bluetooth for direct communication (not a Google project, but they are a member of this consortium).

Beacons

Unlike with iBeacon, where beacons must be approved by Apple, anyone can make an Eddystone-compatible beacon. Current beacon manufacturers include Estimote, Kontakt and Radius Networks.

Further Reading
