Monday, 30 December 2013

Santa Claus: As likely true as not

[Eds' note: We are pleased to present the annual Social Interface Christmas post by Colin B. Picker. We look forward to hosting more vibrant discussions and debates around the social implications of technology in 2014; in the meantime, wishing you all happy holidays and a great new year.]

Colin B. Picker

This post will, using logic and relying on the current inadequacy of science and technology, show that it may be legitimate for a free-thinking person to believe in Santa Claus.  

Were that not enough, this post may also allay concerns that person may have about mortality.

The route to permitting a free-thinking person to legitimately believe in Santa Claus will start with a rather fundamental concern – mortality.  Mortality can be considered in the context of the three fundamental views of existence, which can be simply stated as three possibilities:
  1. I do not exist.
  2. I exist but everyone else is an illusion.
  3. I and everyone else exist.
If option one is true then there is no meaning to death, for if I do not exist now then my ceasing to exist later, my mortality, is simply not possible.

If option two is true then without me there is no existence, so my non-existence is not possible or makes no sense.

If option three is true it means that others like ourselves exist, which means they too think and have a consciousness.

Here is where science and technology enter the discussion.  At the moment, science and technology cannot explain consciousness other than describing some correlated physical activities within the brain (e.g. neurons firing here and there).  The truth is that, despite great advances in medical science and technology, we are no closer to scientifically understanding consciousness than prehistoric humans were to understanding the television.  We really have no idea from a scientific perspective what happens to consciousness when a person dies.  Certainly, we know that the associated neural and other currently understood physical activity ceases with death, but the scientific connection of neural activity to consciousness, thinking and existing, is rudimentary at best.

Science may at some point in the future unravel the connections between consciousness and how the body, specifically the brain, works.  But until then, at a fundamental level science and technology remain almost completely out of the picture in understanding what happens to our consciousness when we die.

As such, all sorts of possibilities remain.  The conventional, non-mystical/spiritual/religious view is that consciousness ceases when we die.  But given the lack of any real understanding of what makes up consciousness, it seems that logically other possibilities may exist.  True, proof for those other possibilities is essentially non-existent, but then there is equally no proof of the expiration of consciousness at death.  

The idea of a soul, a non-corporeal embodiment of our identity, is logically not an invalid option.  So too, heaven, Valhalla, reincarnation or anything that our imagination can conjure up.  All of those possibilities are not ruled out by our current scientific and technological understandings.

So, an agnostic or atheist need not assume that the only alternatives to non-existence at death are those presented by religions.  Scientific understandings today do not require an atheist to accept non-existence at death.  That is but one of the many possibilities opened up by our uncertainty, and arguably it may make as much sense to choose to believe the scenario that is most comforting and most allays one’s concerns about mortality.  The interface with science and technology here is thus through the absence of science and technology.

So, how can this be related to Santa Claus – whose existence is for many a more fundamental question at this time of year?  The answer lies, as it did for our concerns about mortality, in considering the three possibilities that explain our existence.

Under scenario one above - that we do not exist - Santa would presumably not exist either.  But then we would not care, for we too would not exist.  Under scenario two, in which I (or you, the reader) exist alone, Santa could not exist (as he is not me or you, the reader).  But then no one else would exist either, in which case concern about Santa’s existence would pale in comparison with one’s concern about the non-existence of loved ones.

Rather, it is with the third option - that we all exist - that the likelihood or potential for Santa’s existence is revealed.  As noted above, given the fact that we have no clue about what happens to our consciousness on death, we can then just as plausibly argue that our consciousness does indeed wing its way up to heaven, or to Valhalla, or is reincarnated – with all the gods and Bible stories and other beliefs that go along with those views.  With the freedom to believe any of these scenarios, it is but a short leap to assume that, as with the afterlife, there may be other fantastic things in the universe that interact with our life even before death.  And among those fantastic things that we are permitted to accept as not unlikely may be an elderly red-suited jolly gentleman, riding a sleigh through the sky, pulled by magical reindeer, delivering presents to children throughout the world.

Image by Kevin Dooley, made available by Creative Commons licence via Flickr.

Thursday, 12 December 2013

Around the world in 80 hashtags

Amanda Parks

Earlier this year, I decided to leave the safety and predictability of day-to-day life and embark on an undefined overseas adventure. I wanted absolute freedom to see, do, relax, reflect and absorb everything without a pre-determined expiration date staring at me like the stamp on a milk carton. Aside from some bookmarked dates and destinations, my slate was clean. Maybe I’d travel for 3 months or 4, or 6 or more, before growing up and returning to work. My approach was admittedly indulgent, but it was the one I needed to ensure my travel bug was sufficiently fed.

When I told various friends and colleagues about my plan I was surprised by how many asked if I’d write a travel blog. There were several reasons why my answer was no. For one, I’d always disliked the sound of the word blog and I didn’t want to be a blogger [Editor’s note: no offence taken]. More importantly, I had a sneaking suspicion that if I committed to writing a blog it would ultimately detract from, rather than add to, the experience I sought. I knew I’d feel pressured to package my days into posts that would be interesting, funny or somehow read-worthy, with the result that I’d spend hours staring at my laptop and poring over words and photos when I’d rather spend those hours staring at the ocean and pouring a deliciously refreshing drink.

The reality is that blogging, sharing, posting, commenting, tagging, and hashtagging have become so prevalent, so expected, that I felt rebellious for choosing to be a relatively quiet traveller. Why wasn’t I updating my Facebook status upon arriving in each place? Why hadn’t I joined Instagram to tell my travel tales through daily photos? Why did I take 4 months to send my first real update to a relatively small list of friends and family (by old-school email, no less)?

Let me be clear - I didn’t entirely boycott social media while travelling. I did post some Facebook updates and photos, and I reaped great benefits that arose solely because of my participation in social media. For example, I chose certain travel destinations after being inspired by friends’ photos, and I met up with friendly faces in foreign places simply because one of us had posted something on Facebook that told the other where we were. Social media can undoubtedly connect and benefit its users (travellers or not) in incredible ways.

But what I feared was getting dragged over to the dark side, the point at which we shift our focus too far away from the live experience and we become preoccupied, too occupied, with how we will capture it, tag it, post it and wait for the “likes” to filter in.

At one point during my trip, I was one of what felt like 5,000 people packed into London’s Sloane Square to watch a large screen on which Andy Murray was seeking to become the first British man to win Wimbledon in 77 years. Last year, he lost in the final and cried. This year, he was hoping to do neither. The pressure on him was monumental, as was the tension that hung over the crowd. When Murray finally won, the eruption was incredible: people cheered and clapped and jumped and hugged and did whatever victory dance they could manage on the tiny piece of pavement they’d claimed as their own for the last 4 hours. It was one of those spine-tingling live sporting moments that you’re thrilled to be part of and that leaves you feeling like you’ve made a new best friend in the stranger beside you... and it was a moment that I shared with my phone. Ashamed as I am to admit it, I was one of those people who couldn’t clap, jump or hug my human friends beside me because I was busy holding my digital friend above the sea of flailing arms trying to “capture the moment”. While I’m glad to have caught some great footage (which I have actually watched since), the moment would’ve been better if I’d just lived it. I caught myself wondering almost immediately: was this what the dark side felt like?

A photo finish
Happily, my travels involved very few moments like that one and, for the most part, I did what I’d hoped to do when I decided not to blog: I saw, I did, I relaxed, I reflected and I absorbed and I didn’t feel tied to a gadget while doing so.

About a month after that day in Sloane Square, I overheard a brief but brilliant exchange between two friends which, I think, reflects an increasingly unhealthy addiction to social media and the tools that feed it (arguably most striking in its younger users, but the older ones aren’t immune; certain grown-up world leaders have, after all, just been roasted for taking a selfie at Nelson Mandela’s memorial service). After logging into his Facebook account in a hostel foyer, Traveller #1 exclaimed, “This is an epic photo, how can I only have 5 likes?!”, and Traveller #2 replied, “Who cares?”. Indeed, who does care? When we post things, who are we posting them for? Should getting only 5 likes or 3 likes (or, horror of horrors, no likes) make our epic travel photo seem any less epic to us?

Social media undoubtedly has its place, but the trick is to ensure it’s used for the right reasons and without letting it detract from our real-life experiences. Because, in the end, the live show is always better than the recording.

Photograph by Amanda Parks.

Friday, 11 October 2013

Read without seeing: improving access to books for visually impaired persons

Sarah Lux-Lee

On 27 June 2013, the anniversary of Helen Keller's birth, a Diplomatic Conference of the World Intellectual Property Organization (WIPO) adopted the Marrakesh Treaty to Facilitate Access to Published Works for Persons Who Are Blind, Visually Impaired, or Otherwise Print Disabled.  The treaty is intended to ensure that books and other published materials can be made and distributed in formats accessible to people with print disabilities, such as Braille, audio and large print formats.  It does so by obligating its signatories to adopt exceptions to copyright infringement in their domestic laws, to allow accessible copies to be made and distributed within those countries without the need for permission or payment.  It also requires exceptions to enable cross-border circulation of accessible copies of copyright material, in order to reduce the global costs of providing access to copyright works.  Fifty-one countries signed the treaty on 28 June 2013, with several others having followed suit in the months since.  The treaty will enter into force once 20 countries have ratified it.

The treaty is a significant move toward ensuring equality of access to learning materials around the world.  At present, it is estimated that only 5% of the world’s books and published materials are ever published in an accessible format.  In developing countries, where blindness and visual impairment are particularly prevalent, the problem is even more acute, with an estimated 99% of published works never being made available in any accessible format.  The problem is not a technical inability to make the conversions; increasingly sophisticated technologies are available for the fast and affordable conversion of books and other published materials into Braille, audio and large print versions.  Rather, this “book famine” persists in large part because in many of the world’s content-producing countries the conversion of a published work into an accessible format, and the import or export of such products, would amount to copyright infringement.

According to a 2006 survey conducted by WIPO, fewer than sixty countries have limitations and exceptions in their domestic copyright laws that enable the creation and distribution of accessible works.  In addition, because of the “territorial” nature of copyright law, the exceptions that do exist in various countries rarely make allowance for the import or export of accessible works, which need to be separately negotiated with rights holders.  The Australian Copyright Act 1968 (Cth) does feature a number of exceptions and a statutory licence relating to the creation and distribution of accessible works; in this sense, Australia is a leader in the effort to ensure equal access and opportunity to those suffering print disabilities.  

The trans-border provisions of the treaty offer the potential for Australia to further enhance its contribution by implementing an additional exception for the import and export of accessible format copies.  This component of the treaty is intended to ensure that the conversion of a published work only needs to occur once, and that the accessible copy can subsequently be made available to those who need it anywhere around the world.  Cross-border circulation of accessible versions of works will enhance access both directly, by increasing the volume of available converted works, and also indirectly by avoiding the costs of unnecessary duplication and freeing resources for the addition of new titles to the global accessible library.  It will have particularly significant implications for blind, visually impaired and print disabled individuals in the developing world.

The adoption of the treaty was a moment of great significance for the beneficiary communities and their advocates, who have worked tirelessly to improve outcomes in this area.  The World Blind Union has expressed hope that the treaty will be an effective step toward the achievement of equality of access, while noting that work in this area is not yet complete:
In plain language, this is a Treaty that should start to remedy the book famine. It provides a crucial legal framework for adoption of national copyright exceptions in countries that lack them. It creates an international import/export regime for the exchange of accessible books across borders. It is necessary for ending the book famine, but it is not sufficient. Countries need to sign, ratify and implement its provisions. Non-profit organizations, libraries, educational institutions and government need to take advantage of these provisions to actually deliver the accessible books people with disabilities need for education, employment and full social inclusion.
Then-Attorney-General Mark Dreyfus QC lauded the agreement as "a landmark copyright treaty, the first of its kind in the history of the multilateral copyright system”. Curiously, despite Australia’s leadership in negotiations and proud reportage of the treaty’s adoption, it was not one of the 51 nations that signed the treaty in June and, at the time of writing, it does not appear to have subsequently signed. Vision Australia and other representative bodies of Australia’s blind, visually impaired and print disabled communities have nevertheless expressed optimism about the future impact of the treaty in Australia and are continuing to work toward signature and ratification.

Image by Diego Molano, made available by Creative Commons licence via Flickr.

Friday, 20 September 2013

Cyberspace? Well, sort of.

Nicholas Sheppard

I recently got around to reading Edward Castronova's Synthetic Worlds (2006). Around the time that Castronova was writing, synthetic worlds — notably Second Life — seemed like big news in the computing community of which I was a member. Major corporations, we were told, were opening offices in Second Life; newly-minted entrepreneurs were establishing successful businesses; and luminaries were giving press conferences. Reading Castronova's book seven years later, though, prompted me to wonder: where are they now?

The worlds themselves are still operating, and are presumably producing revenue sufficient to keep their operators in business. But I no longer hear much about them in the mainstream media, in technology media, or even from gaming friends. I don't feel like I'm living in cyberspace or The Matrix any more than I did in 2006, or even 1996. (I should note at this point that I'm one of those people that the computer games industry is at pains to show doesn't exist any more: a once-young man who, upon becoming older, grew tired of shooting up yet more pixellated baddies. So perhaps everyone has disappeared into synthetic worlds, leaving me alone on the outside wondering where everyone has gone.)

All of the above media, though, have much to say about Facebook and Twitter. And rightly so, to go by the numbers: Facebook has over 1,100 million accounts and Twitter over 550 million, according to Statistic Brain. The largest synthetic world, World of Warcraft, had a comparatively measly twelve million subscribers at its height. Of course twelve million customers is nothing to sniff at, and World of Warcraft is arguably a sounder business proposition in that its users actually pay to be there. Nonetheless, it's World of Warcraft and Second Life that have those ubiquitous "like me" and "follow me" buttons on their home pages, and not Facebook and Twitter with "fight me" and "visit second me".

At least part of the explanation for these numbers is that the population of synthetic worlds is fragmented across numerous distinct worlds catering for individual tastes like fighting dragons, exploring alien worlds or wearing outlandish costumes. Facebook and Twitter, on the other hand, try to appeal to a universal desire to communicate and to maintain relationships. 

Furthermore, communication tools exhibit strong network effects, in which the usefulness of a tool to one person depends on the number of other persons also using that tool. Network effects tend to create winner-takes-all markets in which the player with the greatest market share rapidly drives out all other players: the main reason to join Facebook is that everyone else has joined Facebook, not that Facebook is intrinsically better than any other communication tool.
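The quadratic flavour of those network effects can be made concrete. If every pair of users is a potential connection, a network of n users offers n(n-1)/2 of them, so a network ten times the size is roughly a hundred times as connected – a Metcalfe's-law-style sketch (the quadratic model is an illustrative assumption of mine, not something the post commits to):

```python
def potential_connections(n):
    """Number of distinct user pairs in a network of n users."""
    return n * (n - 1) // 2

# A network ten times the size offers roughly 100x the possible
# connections, which is why users gravitate to wherever everyone
# else already is.
print(potential_connections(1_000))   # 499500
print(potential_connections(10_000))  # 49995000
```

On this toy model, the value gap between the biggest network and its rivals widens much faster than the gap in user counts, which is the winner-takes-all dynamic described above.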

In that sense, the market for synthetic worlds is a healthier one than that occupied by Facebook and Twitter: each of us is free to choose the world that best meets our needs and means, and entrepreneurs succeed or fail on how well they meet these needs and means. Castronova sometimes uses a metaphor of "migration" to or between synthetic worlds, following an economist's view that people migrate to the places in which they think they will be most happy. So just as I, a computer scientist, might find it attractive to migrate to a city in which there are many computers to be programmed, so might a dragon-fighter find it attractive to migrate to a synthetic world in which there are many dragons to fight.

There is, however, one world from which we cannot migrate. However much we might prefer fighting dragons or designing our own islands, we still need to eat, wash and procreate in the physical world. There's even a school of thought that, even if we could migrate into synthetic worlds, we wouldn't want to. In 1974, Robert Nozick posited the "experience machine", which would provide its user with any experience that he or she desired. Nozick asked: how many people would choose to live their lives in such a machine? Nozick, and I'm sure many others, think the answer is "almost nobody".

And so to Facebook and Twitter, and, for that matter, older communication tools like telephone and email. To paraphrase a famous observation of Arthur C. Clarke, I imagine that our pre-industrial ancestors would find these tools every bit as magical as dragons, wizards and warp drives. Having augmented our existing world with such wonders, why bother synthesising a new one?

Image by Giampalo Macorig, made available by Creative Commons licence via Flickr.

Tuesday, 13 August 2013

Protecting privacy in the digital era

Tessa Meyrick

The arrival late last month of the new heir to the throne was unsurprisingly attended by a flurry of media interest in the UK and beyond, with reports of the royal birth apparently accounting for a staggering 5 per cent of online news content consumed globally on 22 July 2013. When the (yet-to-be-named) Prince George of Cambridge made his first media appearance the following day, every portal, page, RSS and Twitter feed continued to be jammed with details of the Prince's BMI, speculation as to his name (commiserations to those who'd put their cash on 'James'), and even the Royal swaddle he left hospital wrapped in.

Somewhere among all this emerged concern (including from the media itself) over how the Royal parents are to construct some semblance of an ordinary life for the Little Prince once the natal storm has passed. In the UK Government's official response to the news of the birth, Lord Hill of Oareford, Leader of the House of Lords, shared with his peers a hope that the Prince (and his no doubt fatigued parents) be given some privacy. The media agreed, with one major UK newspaper at pains to stress that 'no one, and certainly not the media, would want to deny the Duke and Duchess some time alone with their baby son'.

With the UK Government's plan for a new press regulator (set in train by the Leveson inquiry) put on the back-burner until the Australian spring, it's uncertain which body in the UK will be responsible for ensuring the media comes good on its commitment to honouring the Royals' privacy. In any case, it's also not entirely clear that it’s the conventional media that’s going to need to be held to account.

Prince George is the first future monarch to grow up in an era of social media and under the gaze of many a quick-fingered 'citizen journalist' in possession of a smartphone. Which is to say, Prince George's privacy (or lack of it) won't depend purely on the strength and structure of media regulation in the UK, but will also hang on the development of a freestanding right to privacy in that jurisdiction. For the record: there is no such right in the UK, and nor is there in Australia. But if 'the right to be let alone is indeed the beginning of all freedom', then the influence of Article 8 of the European Convention on Human Rights, and the Court of Appeal's extension of the law of breach of confidence to cover misuse of private information, actually put the UK in comparatively good stead.

In Australia, the idea that privacy is solely a media regulation issue continues to hold ground. This was helped along by the Federal Government's decision in March this year – expressly in the context of its ill-fated media reforms – to sideline the question of whether Australians should be able to sue for serious invasions of privacy. Concerned that earlier consultations on a privacy tort (the 28-month Australian Law Reform Commission inquiry finalised in 2008 and the Government's own consultations in 2011) showed little consensus on what such a right would look like, the Government has referred the issue to the ALRC for yet another inquiry. That inquiry, 'Protecting privacy in the digital era', kicked off in June. The final report, focusing specifically on the legal design of a statutory cause of action, is due to be delivered in June 2014. Whether that report stays with its earlier counterparts in the 'too hard' basket remains to be seen.

This piece first appeared on the Allens intellectual property blog, Scintilla.

Tuesday, 11 June 2013

Choosing technology

Lyria Bennett Moses

Do we choose the “things” that make up our lives? Sounds like a stupid question – there is so much choice. Subject to cost, we can choose among smartphones, televisions, computers, clothes, tools, furniture and a myriad of other objects.

But there are some choices that we don’t get to make. There is no choice but to live in a world with cars and roads (even if we choose not to drive), a world where almost every job involves interacting with the “things” chosen by an employer (from computers to industrial machinery), a world where CCTV cameras often monitor our movements, a world with social expectations around the use of technologies such as email and cell-phones. We don’t choose these things – they become part of the background against which our choices are made.

That is not to say that the technological context in which we live is inevitable or that it is not the result of choice. Quite the contrary. The things that make up the background of our lives have been conceived, created, designed and produced as a result of conscious choice, occasionally by governments but most often by private actors. Our world of things is shaped by decisions by engineers, managers, designers, marketers and others.

Are there any possibilities for collectively shaping our own world – for using democratic institutions to impose our collective will on our man-made surroundings? The idea is attractive.

In 1972, the Office of Technology Assessment was set up to give the United States Congress information that would enable decision-making about technology that reflected a wide range of concerns, adopted a long-term horizon and had a sound factual basis. Nor was Australia immune from these ideas. In the late 1970s there was a Committee of Inquiry into Technological Change in Australia, known as the Myers Committee. It was tasked with a mission to “examine, report and make recommendations on the process of technological change in Australia”, in particular around issues such as the possibility that some technologies might lead to massive unemployment. One result of this committee was the establishment of the Technological Change Committee of the Australian Science and Technology Council, with a mission to “review on a continuing basis the processes and trends in technological change in Australia and elsewhere; and to evaluate and report on the direct and indirect effects of technological change at the national level”. Other projects with similar dreams of shaping the future included the Commission for the Future launched in 1985 (and closed in 1998).

While none of these entities still exist, ideas about involving government and citizens in technological decision-making are not confined to history. Many European parliaments have created or sponsored, in different forms, offices of technology assessment. In Australia, procedures have been developed for engaging with publics in relation to decision-making around new technologies.

To what extent should democracies seek to influence the course of technological development or influence technological design? Sometimes, the choice seems easy, such as where a government bans human reproductive cloning or passes regulations that control developments in a particular field. But most of the time, we simply encourage innovation (for example, through patent law and R&D funding) without thinking too much about its impacts.

Which comes back to the original question – should there be some collective efforts in a democracy to shape or influence technological development? Can we choose the many things that shape our world? Or are our choices limited to those we make, as consumers, among products conceived and developed by others?

Tuesday, 14 May 2013

The bizarro world of technology investment

Rob Clark

I recently read an excellent blog post by Elmo Keep discussing the problems that have arisen as technology has stripped the profit and value out of content creation.

The article points the finger at services like Spotify and at us as consumers for making this devaluation happen. However, I believe you can't really fault consumers for behaving like rational beings and chasing the lowest price. It's how the system is meant to work. The problem has been introduced because of the bizarro world that exists in tech, where billions of dollars are showered on companies with only the vaguest hint of how that money will ever be made back. Where CEOs proudly state they have no desire, or intention, to turn a profit any time soon, or, indeed, really ever. Just like in the GFC, pouring billions of dollars into many investments that are not likely to make a decent return is probably going to end up in tears for all.
Post-Crisis Bizarro

Broken down to its simplest, for-profit companies are not complicated beasts. Share capital is put up in order to create a business, the business is created and profit is made, which is then returned to the shareholder or investor by means of dividends. Of course, money can also be made by capital gain on the shareholding, which is realised by private sales, at an IPO, or by on-market sales once listed. But, in the long term, the model is only sustainable if the company is going to turn a profit, and turn the kind of profit which bears some resemblance to the amount of money invested in the first place. It's no use making a profit of $1 a year if it cost you $10 billion to build the business to make it happen.
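The "$1 a year on $10 billion" point is just payback arithmetic, which a toy calculation makes stark (the figures are the post's deliberately absurd hypothetical, not real company data):

```python
def payback_years(capital_invested, annual_profit):
    """Years of flat annual profit needed to recoup the capital put in."""
    if annual_profit <= 0:
        return float("inf")  # a loss-maker never pays back
    return capital_invested / annual_profit

# $10 billion invested, $1 of profit a year: a ten-billion-year
# payback period, comfortably longer than the age of the universe.
print(payback_years(10_000_000_000, 1))
```

Any valuation has to assume either that profits will eventually grow enormously or that someone else will pay more for the shares later; the post's argument is that much tech investment leans entirely on the second assumption.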

This reasoning does not appear to apply in many tech companies.

Take Facebook. Facebook has turned a profit you say! Yes. However, the profit is a very small fraction of the billions poured into the company through its IPO, such that it will take a very, very long time, and a lot more profit, for Facebook's valuation to look at all sensible. Furthermore, Facebook was only able to get to this stage because of the billions that were poured into it by investors before it went public. Those investors made money, yes, but only because the value of their shareholding went up hugely. That increase in valuation, however, must still be predicated on the fact that the business will eventually make enough money to return that value. The jury is still very much out on whether Facebook can pull it off.

Ditto with companies like Twitter, Spotify and Amazon. These tech companies have a spigot of money attached to them, with far fewer strings on that money than most companies could ever dream of and, like almost all free money, this leads to perverse outcomes.

Content is one casualty of the free money. If a tech company has no real need to make money in order to justify its existence, and if investors keep writing cheques, the company can happily strip value out of whatever provides the grist for its mill.

For example, Spotify charges no money, or a very small amount of money, to access almost all the music you could ever want to listen to. There is literally no reason to ever buy music again. It then takes some of the pathetic sum generated from advertising revenue and subscriptions and hands it on to the content creators, without whom it would not exist. It keeps some for itself, but of course not nearly enough to make its services profitable, but then this does not matter because its investors apparently don't care about the company ever turning a profit. Were Spotify actually profitable, and properly profitable, content creators could demand a bigger share of the pie and there would actually be a pie worth fighting over. Instead, being a pauper, Spotify simply does not have much to give. Its investors don't seem to care, but content creators do – they are left to fight over a small Lean Cuisine pie, which doesn't taste particularly good anyway.

This forces the value of the content down to near zero. Why don't content creators simply refuse to deal with Spotify? I'm sure they wish they could, but of course they have to try to get some money for their wares, because on the other side of Spotify is actual zero.

Isn't Spotify better than nothing then? Well, no, because with piracy at least people know that "free" is wrong. That may not stop everyone, but it will stop many. Spotify, by contrast, gives a stamp of legitimacy to paying almost nothing for content and, as Elmo says in her article, conditions people to see that as the price of content. While iTunes stripped a lot of value out of content, $1 per song is still better than fractions of a cent per song.

A similar thing occurs with Amazon, as Matthew Yglesias of Slate points out. Amazon can afford to offer lower prices than everyone else because its investors apparently don't care that it makes virtually no profit. Amazon's share price continues to hold up despite CEO Jeff Bezos having shown no great desire to actually make investors any money. While it is very generous of Amazon's investors to essentially pay the margin on goods that would otherwise be borne by the consumer, the end result is that all the other companies, the ones without rich parents shovelling them free money, are also forced to make virtually no profit and to squeeze their suppliers dry.

Something has got to give.

In the case of Amazon, maybe that's the plan: drive everyone else out of business and then turn into a rent-seeking monopolist. But it's a pretty ballsy plan, and one that relies on investors not losing their nerve and withdrawing their money early, having already thrown good money after bad and, in the meantime, put almost everyone else out of business.

In short, despite the mystique, most tech companies leverage content created by others, whether it is Facebook and Twitter with content created by us, Spotify with content created by artists, or Amazon with content created by almost every conceivable business in the world (you can buy light bulbs on there, for goodness sake!). While some content is easily created for free (especially in the case of social media), other content unavoidably costs money to make. Unless and until the investors in tech companies become more rational in their investments and in their demands of their companies (or alternatively stop looking to make billions on the flip), they could well allow their companies to kill the content on which those companies rely.

Business is business, and profit is the oil that makes all business run; tech is not special in that regard. When that requirement is removed from one link of the chain, it can cause havoc while all the other links are behaving in a self-interested fashion. We consumers are like kids in a candy store: we're going to gorge on candy, even if we know it's not good for us in the long run. So for goodness sake, stop giving us cheap candy!

Image by ElDave, made available by Creative Commons licence via Flickr.

Tuesday, 19 February 2013

Digital Media and the, ahem, Business Model of the Future

Nicholas Sheppard

I worked for some years as a researcher in copyright protection technology, though my funding has long ended and I've since moved on. Digital copyright issues probably don't generate quite the fuss they did back in the hey-day of Napster, and this year I discovered that the ACM Workshop on Digital Rights Management — where I think some of the most interesting work in this field was presented back in its own hey-day — is no longer on the calendar. Does this subsidence indicate that issues of copyright and digital media have now been settled to everyone's satisfaction, or just that my former co-travellers in digital media and security have gone off to write about more current headlines, like Facebook's privacy policy?

One recommendation that I heard over and over again is that the music industry must combat infringement of its copyrights by "getting new business models." Ironically, perhaps, one of the original hopes for rights management technology was that it would enable new business models based on paradigms other than the exchange of physical copies, not usher in an era of confusing and inconvenient rules of use.

Retailers have, in fact, tried a number of different business models — possibly more than critics give them credit for — including subscription services like Rhapsody, ad-supported services like Spotify, "viral" services like PotatoSystem, and bundled-with-device services like Nokia's Comes with Music (now largely defunct). Well-known bands Nine Inch Nails and Radiohead even tried giving their music away for free or in return for a donation, though neither of them is doing it any longer.

By all accounts, though, the most successful retailer of digital music is Apple's iTunes, which charges a one-off fee for a recording to be kept and played as often as the buyer likes. Sounds rather like the old business model to me.

Might it be that music listeners — or the ones willing to pay for the pleasure, at least — are not as interested in new business models as would-be copyright reformers thought they would be? And did we go through all of that Napster-inspired anguish only to find ourselves doing exactly the same thing as before?

Not quite, obviously, since Rhapsody, Spotify and others do have customers — even if not as many as iTunes — and there may be factors other than business models contributing to iTunes' success. One certainly hopes that we've learned a thing or two from the experience.

The video industry, initially protected from file-sharing networks by the time it took to download a video around the turn of the century, has had a chance to learn from the experience of the music industry. The trend for copyright protection technology here has been towards so-called "rights locker" services like the Digital Entertainment Content Ecosystem's UltraViolet and Disney's KeyChest, along with infringement-detection systems like YouTube's Content ID, rather than the copy-prevention technology that the software and music industries experimented with in times past.

A rights locker is, in essence, an Internet database that records a buyer's right to use a song, video or book. When the buyer wants to access the item, his or her device checks with the locker that its user has, indeed, purchased the right to use it. If well-designed and -implemented, rights lockers might eliminate some of the inconveniences that customers experienced with copy-prevention technologies, including incompatibility, an inability to format-shift, and an inability to make back-ups. They also seem to fit nicely with the pay-once-for-eternal-usage model that we have become accustomed to.
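The core idea is simple enough to sketch in a few lines of code. The following is a minimal illustration only, with hypothetical names throughout; real rights-locker services such as UltraViolet involve networked, authenticated APIs and standardised licence formats, none of which is shown here.

```python
class RightsLocker:
    """A toy database recording which users have purchased which items."""

    def __init__(self):
        self._rights = set()  # set of (user_id, item_id) pairs

    def record_purchase(self, user_id, item_id):
        self._rights.add((user_id, item_id))

    def has_right(self, user_id, item_id):
        return (user_id, item_id) in self._rights


def play(locker, user_id, item_id):
    """The device consults the locker before allowing playback."""
    if locker.has_right(user_id, item_id):
        return "playing " + item_id
    return "access denied"


locker = RightsLocker()
locker.record_purchase("alice", "song-123")
print(play(locker, "alice", "song-123"))  # playing song-123
print(play(locker, "bob", "song-123"))    # access denied
```

The point of the sketch is that the buyer's entitlement lives in the locker's database, not in any copy the buyer holds, which is exactly why the arrangement looks more like an access right than a traditional copy.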

Rights lockers, however, don't actually work very much like the books, CDs and DVDs that got us used to the pay-once model in the first place. Since the right to use something is governed by a record in a database rather than possession of a physical copy, it looks more like an "access right" than a "copy right".

How much does this matter? It certainly matters to lawyers, for whom an "access right" and a "copy right" could be quite different things (see Marcella Favale's analysis of EU law for a recent example). But will the average user continue to think that he or she owns something, even if it is an entry in a database rather than a physical book, CD or DVD? Or will the user get used to the idea that "this work is licensed, not sold", in the words of many a software agreement? And, if the latter, will he or she be more likely to explore alternative business models?