Friday 21 December 2012

A Universe of Data is Not Enough

Colin Picker

Humans have always recorded information (or data). From early cave drawings to Edison’s phonograph cylinders to the photos and music on iPhones, recording and storing data seems to be a human attribute. But today we live in a furious period of data storage. That data now includes pictures, video, music, documents and records of almost every type of human activity and thought (though a very large percentage is, as has always been the case, pornographic; there are even pornographic cave paintings).

Today that information is increasingly stored at the electronic level. In the future we can expect almost all data to be stored electronically, and even sub-atomically (utilising the smallest constituent parts of the universe). While we occasionally record over past recordings, we increasingly produce data that will be archived, eventually producing archives that will last forever, or at least until the end of the universe (assuming there will be such an end; more on that below). As our technological needs increase, more and more data is needed, and more and more will therefore be stored. But is there an upper limit to the amount of data that can be stored? I don’t mean the limit on a hard drive or a very large data storage array. I wonder whether there is a theoretical limit imposed by the very nature of the universe.

I first started to think about such an upper limit when considering the non-existence of infinity (more on that later; admittedly an unusual thought experiment for a law academic). In any event, my ruminations took me to a place and time where we, humanity, had already moved to store our data at the quantum level, utilising the smallest sub-atomic components to represent the zeros and ones of data (assuming the correctness of quantum limitations). One quark, or whatever will at that time be the smallest unit, would represent one piece of data; another quark, or its specific absence (a non-quark), would represent another. But if the universe is finite in size and composition, then there is a finite number of quarks available for use from the existing matter of the universe, including that used in the memory portion of our brains and that which can be converted from the various forms of energy in the universe. There is therefore a finite amount of data that can be stored on that finite number of quarks. True, utilisation of that large capacity is a long way off, but it is, critically, a finite long way off. Furthermore, once imagined, this limit exists, and it has some very significant metaphysical consequences.

One consequence ties in with my original concern about infinity. One way to consider numbers is that they only exist if they can be represented (in our memory, on paper, as data, as cave drawings, etc). But if there is a data limit on the total representations of numbers, then there is a limit on those numbers. In other words, there is a finite number of numbers that can be expressed, and hence that can exist, a number limited by the data storage capacity of the universe. True, it is a large number, but it is a finitely large number. In other words: not infinite.
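The arithmetic behind this finiteness argument can be made concrete. A minimal sketch follows; the particle count is only a rough, commonly quoted order-of-magnitude estimate, and "one bit per particle" is an assumption for illustration, not a claim about real quantum storage:

```python
# Sketch: a finite store of n bits can distinguish at most 2**n
# different bit-strings, and hence represent at most 2**n distinct
# numbers -- an enormous but finite count.

def distinct_values(n_bits: int) -> int:
    """Number of distinct bit-strings (and so representable numbers)
    that fit in a store of n_bits bits."""
    return 2 ** n_bits

# An 8-bit store distinguishes 256 values.
print(distinct_values(8))  # 256

# A rough estimate often quoted for the observable universe is on the
# order of 10**80 particles; at one bit per particle, the count of
# representable numbers would be 2**(10**80): finite, not infinite.
N_PARTICLES = 10 ** 80
print(distinct_values(100) > 10 ** 30)  # even 100 bits exceeds 10**30 values
```

However large the exponent grows, the set of expressible numbers stays finite, which is the point of the argument above.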

But back to the data storage issue. Perhaps the most important consequence is that eventually, when we do hit that storage capacity, all new knowledge will have to displace some of the previously recorded knowledge. Thus, while the composition of that knowledge may change, it can never exceed the total finite storage space. Once replaced, the data will be lost forever (assuming no duplication, which we should assume, for until we have eliminated all excess duplicates there is no real storage problem). While much of what is lost at first will be inane, eventually all the inane and frivolous pieces of data and knowledge will have been deleted to make way for more serious and important information. What happens then? We will need to be careful about the creation of new data (including new memories), for it will then require us to make hard choices about what other data must be erased to make room for the new.

So, every time you download an “app”, create a new document, or take a photo and copy it to your hard drive or into some data cloud or other, you are hastening the day when we run out of data storage, and hence cap our collective accumulation of new knowledge. Maybe, like fossil fuel conservation, we need to start thinking about data conservation, not for us, but for our children. A good start would be to delete this post from your computer and then to forget all about it.

Thursday 13 December 2012

Cyborg Cops, Googlers and Connectivism

Alexander Hayes

We have become the camera and it has become us. (Aryani, 2012)


I rarely leave my mobile phone out of physical reach or indeed earshot and it is almost always powered on. It has become my compass, calculator, calendar and main communication channel with literally thousands of contacts in my networked cloud.

You might agree that this is not dissimilar to your own current relationship with this disruptive technology, your personal electronic portfolio. On reflection, you might also recognise the profound impact this technology is now having on your communications with family, friends and work colleagues. At a stretch, you might even acknowledge that your cell-phone is "closer" to you than you ever imagined possible a decade ago, and is thus, in relative terms, wearable.


Project Glass is a research and development program by Google to develop an augmented reality head-mounted display (HMD). The intended purpose of Project Glass products is the hands-free display of information currently available to most smartphone users, allowing interaction with the Internet via natural language voice commands.

Whilst we might recoil, aghast, at Steve Mann’s predictions as to our wearable, portable and existential future, we must also acknowledge that this consumption of hyper-connectivity is simply yet another transformation in humanity. Given that Project Glass connects wearers en masse and ostensibly ensures that they can continue with physical activity hands-free, it arguably creates one of the largest known veillance vehicles into previously unmapped territories that humans already frequent. A hands-free, fashionable and constantly connected technology positions the product well amongst the seemingly unending array of Google's seamless and integrated services.

It is notable that Google's CEO Eric Schmidt has been credited with publicly dismissing privacy concerns as unimportant or old-fashioned. As Dwyer puts it: "When companies sell information for a living, privacy is not their priority."

Irrespective of what challenges Google now faces around its users' privacy, it seems evident that this body-worn technology is set to revolutionize the manner in which we will interact with each other in the not too distant future and conversely how others will interact with that open and captured data thereafter.

At a recent presentation, I expressed my own feelings of unease at the roll-out of body wearable technologies across the Australian Police Force, where officers are conducting trials of location-enabled body-worn cameras and digital video recorders as part of law enforcement activities not unlike what is already fully deployed in the US and UK.

This brief cross-sector meeting of minds brought together surveillance studies experts, academics, law enforcement officers and private investigators, alongside an equal proportion of actors, artists, educational technologists and technology service providers. What was apparent from this seemingly dissimilar array of roles and occupations was a unified interest in what this technology now poses for the law enforcement officer, for the jury, and ultimately for either the victim or the perpetrator. It also became very apparent at this workshop that, in a crowded public space, the seemingly innocuous cell-phone is now poised to facilitate an emergent, omniscient form of inverse sousveillance.

I also spoke about uses of location-enabled body-worn cameras in sports, medicine, health sciences, utility services, agriculture, manufacturing, engineering, construction and transport, to name but a few of the areas where these technologies are being used in an international education and training context. In many of these cases the premise for deployment is to build upon and improve existing work practices, with the technologies selected by seemingly well-informed and trusted technical experts, substantially guided by organisational policy and secure data management plans.

The interoperability between these location-aware body-worn technologies now opens new domains of socio-ethical consideration as to the effects that an always-on network will have on humanity as a whole.

Educators will need to shift to a networked learning theory for the digital age, a connectivism so profound that the very architectures of participation are set to become only a loosely bound accreditation arrangement.

"It is widely understood that the area of digital technologies in education covers education through digital technologies. However, it must also, crucially, encompass education about digital technologies, and particularly about their social, sociopolitical and ecological consequences." (Pegrum, 2009)

What is apparent is that the general public will now need to embrace change more rapidly than ever to accommodate a cyborg cop, an omnipresent jury and a frame-by-frame recollection of events.

Google's first "Glass Session", which demonstrates what it’s like to use Glass while it is being built, follows Laetitia Gayno, the wife of a Googler, "as she shares her story of welcoming a new baby, capturing every smile, and showing her entire family back in France every ‘first’ through Hangouts." (Google+ post, 2012)

Our role has changed from that of a passive participant in an abstract recollection to a first-person perspective; we have become the camera and it has become us: in essence, a state of Uberveillance.


Image by De Angelis, Canadian Committee for World Press Freedom, Cartoons 2012.

This post is based on http://www.alexanderhayes.com/publications/2012-cyborg-cops-googlers-and-connectivism. For more from Alexander Hayes, please visit http://www.uberveillance.com. For information about the 2013 IEEE International Symposium on Technology and Society in Ontario, Canada in June 2013, please visit http://veillance.me/.

Monday 5 November 2012

Crowdsourcing a Constitution

Alana Maurushat with David Lee

When I worked at the University of Hong Kong, I had the privilege of engaging in many conversations with the world-renowned constitution-writer and scholar Professor Yash Gai. Professor Gai led constitutional reviews in Kenya and Fiji, and was asked to assist with the constitutions of Iraq and Afghanistan. From many casual lunches with colleagues in Hong Kong, I can still recall how passionate Professor Gai was about constitutional writing that was “right” for the people of the country in question. He was a staunch believer in the idea that extensive discussion and consultation among all communities of a nation was essential for building a strong constitution that would stand the test of time: constitution writing by consensus. These constitutional reviews often required Professor Gai and his committees to lead meetings throughout the urban and remote areas of a nation, and the consultations often lasted years, in order to ensure that small ethnic minorities were not neglected. The process was epic.

Given that a constitution is construed as one of the pillars of a nation’s identity, one might ask: why not ask the citizens to draft it? With the rise of online user input platforms such as Twitter and Facebook, collaborative innovation has never been easier. It should come as no surprise that Facebook alone is used by nearly 12 million people in Australia.

This increasing popularity of social media is exactly what the Nordic nation of Iceland needed. Following the collapse of its economy and an outcry from its citizens, the Icelandic government decided to take advantage of this method. In 2011 it introduced a process involving a unique democratic approach: using social media such as Facebook and Twitter to identify ideas, recommendations and provisions to be included in the new constitution. The social feedback will not be binding on the Parliament of Iceland, but it will most likely have significant influence on politicians. Because the proposals are drafted by the public, it will be impossible for politicians to "sweep popular proposals under the carpet". Icelandic citizens are welcoming the idea too: 66% of voters agreed in a referendum to use the resulting document as a framework for the nation’s new constitution. This unique drafting method is a prime example of "crowdsourcing".

First coined by Jeff Howe in an article in Wired, the term "crowdsourcing" refers to a concept similar to outsourcing. Outsourcing involves an identified and selected individual or group developing a concept or performing work duties. Crowdsourcing is a much bigger idea: it brings in the public and involves the crowd in a creative, collaborative process. Many businesses have taken advantage of this method from as early as 2001. iStockPhoto was created as a marketplace for bloggers and web designers to purchase stock images from a gallery of photos contributed by amateur photographers. The collaborative input provided by thousands of contributors allowed these images to be sold at very low prices, often undercutting professional photographers by as much as 99%. Other notable businesses benefiting from crowdsourcing include Reddit, YouTube and InnoCentive.

Crowdsourcing through social media creates exciting opportunities, as it empowers people to participate in a truly democratic process. To date, however, the method has been used mainly by businesses for financial gain. Iceland must therefore be commended for taking the unprecedented approach of employing crowdsourcing in politics, in an effort to produce a constitution that is “right” for its citizens. Other nations will undoubtedly take note; it won’t be long before other governments follow the unique path created by Iceland. It is arguable that the constitutions of other nations are long overdue for reformulation, with netizen contribution.

For example, the Australian Constitution was drafted by the delegates of the States in the late 19th Century, and the only input provided by the people was voting for its adoption. However, this is a debate for another time.

Image by James Cridland, made available by Creative Commons licence via Flickr.

Friday 24 August 2012

Living with our heads in the Cloud

Hadeel Al-Alosi

Technology has led to rapid advancements in our society.  While reading this, many of us will probably be scrolling through a Facebook page or flicking through an iPhone.  Much of the data we are accessing may well be stored in the Cloud.

At its broadest, cloud computing is the provision of computing resources as a service over a network, usually the Internet. Cloud computing services have been available for a number of years, including from well-known organisations such as Google and Microsoft (for example, Hotmail). These services allow consumers to access data and applications without having to install or store them on their personal computers.

The personal cloud promises many benefits. It allows you to manage all of your PC and mobile devices, and to have every piece of data you need at your fingertips, so that you can share your information with friends, family and colleagues in an instant.

But before becoming over-excited by all the benefits that cloud computing promises to deliver, there are important issues to consider.

Theft and loss of data: should cloud service providers be bound by minimum security standards that ensure personal information is not lost or stolen? Should service providers be able to contractually limit their liability for lost or stolen data? What if the service provider is forced to close down due to financial or legal problems, causing customers to lose their data? Who should be responsible for having back-up and recovery processes in place?

Data location: the fact that data is stored by a cloud provider, which may be located overseas, means that individuals and businesses have less control over their data. Users should be questioning who is actually holding their data and where it is located. With the growth in reliance by Australians on cloud computing services, it may be worth choosing a provider based in Australia. This would reduce the risks of storing data with overseas providers, which may be in countries that have inadequate privacy laws or are prone to natural disasters.

Privacy issues: there are endless privacy issues raised by cloud computing, such as who will have access to your data and whether (and which) privacy laws will apply. Are there circumstances that justify the disclosure of data (for example, to aid law enforcement)? Also, what happens to data once a contract with a cloud service provider is terminated? For example, Google Docs states that it “permanently deletes” data from its system. However, it also warns that “residual copies of your files and other information may remain in our services for three weeks”.

Most individuals and some businesses overlook these important issues. As is often the case with e-commerce transactions, many people blindly click the “I agree” button when signing up for services without reading the terms and conditions provided. We tend to think more about these issues when something goes wrong, for example, when someone's Facebook account has been hacked by a vengeful ex-partner, or when precious data has been lost.

As to the future of cloud computing services, I think it is timely that we generate some solutions to these problems. Perhaps, somewhere over the rainbow, we can find solutions that allow us to reap the benefits of the cloud, while ensuring we are protected from all external threats.

So, what do you think? – is cloud computing a threat or an opportunity?

Friday 3 August 2012

A Mobile Phone, Amid the Darkness

David Larish

I just read Amy Spira’s post on this website, “What we lost when we gained the light bulb”, 18 November 2011, in which she detailed the sadness of Nicaraguan townspeople at the prospect of electricity darkening their lives. I want to share a similar experience from my time in Kenya in 2010 but from an altogether different perspective.

I was working at Olmaroroi Primary School, which consisted of a series of sheds haphazardly constructed on dusty, red dirt in Maasai territory in the Rift Valley. The nearest town, Ngong, was a bumpy, 45 minute motorcycle ride away. I stayed with a local family of fourteen, including two wives. They lived in mud brick huts, used a hole in the ground as a toilet and, in the absence of electricity, burned wood for cooking and lit candles when the sun set. There was no running water. The nearest source of it was the communal well at the school, a ten minute walk.

Like Amy, I found myself as far away from technology as I had ever been.

This, in my mind, was a good thing. On my first night, after the older children had finished looking after the cows and goats for the day and after the younger ones had returned home from school, the family gathered in the kitchen, drinking tea, cooking dinner, eating together and then chatting into the night in semi-darkness. I contrasted this with a Western childhood of the Noughties – spending the afternoon on the phone to friends while commentating on the video games I was playing, watching TV during dinner, rushing back to the computer in my bedroom to go on MSN – and I was envious. What I had when I grew up meant that there were a lot of things that I did not have.

The bliss I was experiencing that night was punctured by the shrill beep of a text message which, to my immediate relief, did not sound as if it had come from my phone. In fact, there was confusion as to whose phone it had come from because, as it later emerged, each of the children aged over 13 had one.

My initial thought was that convincing a family who lived without running water or electricity of their need to own multiple mobile phones must have taken some phenomenally effective marketing on the part of the then major Kenyan mobile phone companies, Safaricom and Zain. In fact, these companies had even implemented a system whereby you could buy phone credit and transfer it to loved ones, family or friends (imagine that: ‘happy birthday my brother – here’s enough credit to call me on my birthday’).

I felt that this was a clear instance of these companies exploiting the technologically-starry-eyed family by enticing them to spend the limited money they had on things that they did not need. This view was reinforced when I later became aware that a family member was required every few days to make a trip into Ngong in order to charge a half dozen or so battery-depleted mobile phones at the “electricity shop” that had opportunistically sprung up to service this niche.

I was also concerned that the special traditions held by the family and the atmosphere when the family came together would be eroded by the mobile phone, which I saw as a gateway – both symbolically and practically – to the spectre of other technologies spreading into their lives.

One night towards the end of my stay, I (subtly) raised these issues with those members of the family who were old enough not to have received a mobile phone when they had reached puberty. As they pointed out, I had failed to see the benefits the mobile phone had brought to the togetherness of the family. The family was now able to stay in touch with family members who had moved away for school or work. It was easier for the family to make arrangements for everyone to be in the one place. By keeping in contact with past volunteers who had returned home, the family would reminisce together.

I still have mixed feelings about the impact of the mobile phone on the family, but I now see it in a more balanced light than I first did. In hindsight, it was difficult for me to dissociate my anxiety about having too much technology in my life from my views. I now think that the mobile phone is far less of a threat to the family’s connection and values than the computer, iPod or television – which are a while away yet.

But if I want to know if any of their attitudes have changed, I’ll just ask them next time I Skype their mobiles.


Image by Charles Crosbie, made available by Creative Commons licence via Flickr.

Monday 18 June 2012

The smartphone and tablet patent wars

David Larish

During the Cold War, the US and the Soviet Union engaged in a form of military-industrial battle, the so-called Arms Race. As the stockpile of nuclear weaponry on both sides increased, the world, paradoxically, became a safer place. This occurred, most historians agree, as a result of a concept that came to be known as Mutually Assured Destruction (MAD). Each superpower was deterred from hitting the other with a nuclear attack because it realised that this would inevitably lead to the opponent unleashing a nuclear attack on a similar scale. An attack on the other was, in effect, an attack on oneself.

A few decades on, the world’s leading smartphone/tablet manufacturers have been engaging in a race of their own. The weapon of choice is the patent. Manufacturers have been stockpiling them for offensive and defensive reasons. That is nothing new. However, until April 2011, a force resembling MAD seemed to be preventing worldwide mass smartphone/tablet patent infringement lawsuits. Then Apple launched its first strike against Samsung’s Galaxy Tab, and Samsung retaliated against Apple’s iPhone and iPad, resulting in numerous and as-yet-unresolved global skirmishes between the telecommunications behemoths.

The rationale for the patent system is based on promoting innovation; you are less likely to throw time, resources and dollars at innovating if others can piggy-back on whatever you create with impunity. Leaving aside the debate about the merits of the patent system in general, it is clear that, when it comes to the smartphone/tablet wars, the patent system is struggling to cope.

In conventional patent litigation, the party asserting infringement generally confronts the alleged infringer with one (or a few) patent(s), albeit often asserting infringement of multiple claims within each patent. The distinction with tablet/smartphone patent litigation is that the manufacturers have literally hundreds of patents in their stockpiles: method of unlocking the device, method of scrolling within the device etc. When contemplating litigation, these companies have vast possibilities from which to choose and are therefore able to assert patent infringement in respect of numerous independent patents. It only takes a finding of infringement in respect of one of these patents for a rival smartphone/tablet to be removed (in all likelihood) from the market. Based on sheer weight of numbers and probability alone, the odds are stacked in favour of the company asserting infringement. Throw enough mud at the wall and some of it will stick.
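That mud-at-the-wall intuition is easy to quantify. A minimal sketch follows, assuming (purely for illustration) that each asserted patent independently has some small probability p of yielding a finding of infringement; the 5% figure is invented, not drawn from any case data:

```python
# Sketch of the stockpile effect: the chance that AT LEAST ONE of
# k independent patent claims succeeds is 1 - (1 - p)**k, which
# climbs quickly as the stockpile grows.

def chance_of_one_win(p: float, k: int) -> float:
    """Probability that at least one of k independent claims succeeds,
    given each succeeds with probability p."""
    return 1 - (1 - p) ** k

# With an invented 5% chance per patent:
print(round(chance_of_one_win(0.05, 1), 3))    # a single patent: 0.05
print(round(chance_of_one_win(0.05, 50), 3))   # fifty patents: roughly 0.92
print(round(chance_of_one_win(0.05, 200), 3))  # two hundred: near certainty
```

Even a weak per-patent case becomes a near-certain win somewhere in the litigation once hundreds of patents are in play, which is precisely the asymmetry described above.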

Let’s use Game Theory to examine the relationship between Companies A and B, two successful competitors in the smartphone/tablet market. Company A may face two different scenarios: (1) Company B has sued Company A for infringement of its patents; or (2) Company B has not sued. In either case (leaving aside transaction/legal costs), Company A is in a better position by opting to sue Company B for infringement of Company A’s patents than by opting not to. Under (1), bringing an infringement action against Company B is necessary as a defensive mechanism – the return of fire to Company B gives Company A some clout at the negotiating table (ie MAD). Under (2), offensive action restricting Company B’s smartphone/tablet from operating on the market would, if successful, harm a competitor and increase Company A’s market share.
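The two scenarios above have the structure of a prisoner's dilemma, and that structure can be sketched in a few lines. The payoff numbers below are invented for illustration only; what matters is their ordering, which mirrors the argument: suing is the better reply for Company A whatever Company B does.

```python
# Hypothetical payoffs for Company A (higher is better), indexed by
# (A's choice, B's choice). The specific numbers are invented; only
# their ordering reflects the scenarios described in the text.
PAYOFF_A = {
    ("hold", "hold"): 3,  # mutual restraint: products compete freely
    ("hold", "sue"):  0,  # A is sued with no counter-pressure at the table
    ("sue",  "hold"): 4,  # A restrains a rival's product unopposed
    ("sue",  "sue"):  1,  # mutual litigation: the MAD stand-off
}

def best_reply(b_choice: str) -> str:
    """Company A's payoff-maximising choice, given B's choice."""
    return max(["hold", "sue"], key=lambda a: PAYOFF_A[(a, b_choice)])

# Whatever B does, suing pays more for A: "sue" is a dominant strategy,
# even though ("hold", "hold") gives the best joint outcome.
print(best_reply("hold"))  # sue
print(best_reply("sue"))   # sue
```

By symmetry the same reasoning applies to Company B, which is why, absent a MAD-style deterrent, both firms are pulled toward litigation.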

The best position overall for Companies A and B involves neither suing the other. This is because there is no threat of the products’ release into the marketplace being restrained and the time, inconvenience and expense of litigation are avoided. The concern, however, is that given:

• the fiercely competitive nature of the smartphone/tablet market;

• the diversity of the patent stockpile at the disposal of the smartphone/tablet manufacturers; and

• the possibility that, since April 2011, MAD is no longer effective in operating as a deterrent to litigation;

tablet/smartphone patent litigation will, in future, become the norm rather than the exception.

This would be disastrous for retailers and consumers. The tablet/smartphone market would suffer from less competition, greater uncertainty and the absence of products which consumers wanted. Most worryingly, the outcome by which the patent system is justified and by which the disadvantages associated with it are tolerated – the incentive to innovate – would actually be counteracted. Yes, in the final outcome, genuine smartphone/tablet innovations should not fall foul of the patent system. However, in a practical sense, when enough mud is thrown there is a strong chance that they will, at least at some point in the litigation process. And, even if they do not, the shifting of resources away from product development and towards courtroom battles, the delays in the release of products and the helplessness of small stockpile tablet/smartphone manufacturers when faced with legal threats against their products from large stockpile tablet/smartphone manufacturers would all be considerable impediments to innovation.

Watch this space. The situation needs to be closely monitored.

Thursday 17 May 2012

EVENT TONIGHT: Videogames as Telehealth Technology

Speaker: Stuart Smith

Host: IEEE-SSIT
Time: Thursday 17 May 2012, 6pm for 6:15pm start

Location: John Goodsell Building, Room LG19, University of New South Wales Kensington Campus (parking station at Botany St, Gate 11)

Cost: Free. Public welcome

RSVP: Lyria Bennett Moses (lyria@unsw.edu.au)
 
About the event
 
Declines in physical or cognitive function are associated with age-related impairments to overall health. Functional impairment resulting from injury or disease contributes to parallel declines in self-confidence, social interaction and community involvement. Fear of a major incident such as a stroke or a bone-breaking fall can lead to the decision to move into a supported environment, which can be viewed as a major step in the loss of independence and quality of life. Novel uses of videogame console technologies are beginning to be explored as a commercially available means of delivering training and rehabilitation programs to older adults in their own homes. We provide an overview of the main videogame console systems (Wii, PlayStation and Xbox) and discuss some use-case scenarios for rehabilitation, assessment and training of functional ability in older adults or those living with a disability.

About the speaker

Dr Stuart Smith is an NHMRC Career Development Award-Industry researcher with a particular interest in the application of technologies such as video games and the internet for home-based monitoring of health.


He was involved in establishing the Technology Research for Independent Living Centre in Ireland which developed technologies to monitor the health of older adults to facilitate their continued independent and healthy living.

He currently chairs the working group on Games for Health within the Health Informatics Society of Australia, whose aim is to establish connections between health researchers and video game developers and manufacturers to develop games that are appropriate for patient rehabilitation.

Dr Smith has secured NHMRC funding to develop video games for reducing fall risk in older adults and is a PI on Dr Penelope McNulty’s NHMRC project investigating the use of the Nintendo Wii in rehabilitation of upper limb function following stroke. He is also involved in pilot trials assessing the effect of video game play in rehabilitation of stroke and spinal cord injury patients at the Prince of Wales Hospital.

Dr Smith has recently had a manuscript accepted by the British Journal of Sports Medicine on his modification of the ‘Dance Dance Revolution’ video game for step training in older adults. He has two recent book chapters on the application of video gaming technologies to rehabilitation and has organised workshops on Games for Health at international conferences.

Recently Dr Smith contributed to a successful bid for funding from the Federal Department of Education, Employment and Workplace Relations to build video games that are specifically targeted at health.
 
About the sponsor
 
The IEEE is a voluntary organisation with more than 350,000 members. The SSIT has about 2000 members in 56 countries worldwide, and is growing. The Society focuses on the impact of technology on society, including both positive and negative effects, the impact of society on the engineering profession, the history of the societal aspects of electrotechnology, and professional, social and economic responsibility in the practice of engineering and its related technology.
 
SSIT publishes a quarterly journal, IEEE Technology & Society Magazine (free with membership).
 
SSIT can be contacted at ssit.australia@ieee.org.

Thursday 10 May 2012

Recycled music in the digital era

Adrian McGruther

I remember spending countless hours after school, rummaging painstakingly through the ‘new arrivals’ bin of my local second-hand CD store. My meagre income as a suburban paperboy meant the new release section at Brashs Music was well out of my reach (unless I was content settling for a Jason Donovan single in the bargain bin). Having whittled a crate's worth of CDs down to a shortlist of five or six, I was left with the painful decision of which two or three were truly worth shelling out for. Upon arriving home, broke but beaming, I’d invariably discover that one of my new treasures had a deep, long scratch across its surface, right in the middle of a blistering Kirk Hammett guitar solo. Bummer. But, you get what you pay for, I’d remind myself.

Had I gone to school during the digital age, I might’ve turned to a new US-based service, ReDigi, which offers ‘second-hand’ mp3s for sale online.

What is ReDigi?

ReDigi describes its offering as ‘recycled digital media’, but with the benefit that, unlike physical media, its products never scratch or wear out. Users who wish to sell digital music files that they no longer want can ‘upload’ the tracks to ReDigi’s server for other users to purchase and download. ReDigi claims to have what it calls ‘verification’ and ‘hand off’ technology, which ensures that the digital music file is from a legitimate source and that any additional copies of a sold file are also deleted from the user’s computer.

If a copy of a file that has already been sold reappears on a seller’s computer or synced device, and the seller does not delete it after receiving notice from ReDigi, the seller’s account with ReDigi may be suspended or terminated. ReDigi also pays a percentage of sales to the relevant artists and labels. ReDigi is different from file-sharing sites in that each track offered for sale is a unique, identifiable file, and has not been cloned from a master file.

Is it legally legit?

That’s the big question at the moment. Many record labels and industry bodies are casting a raised eyebrow in ReDigi’s direction because the service treads upon a legal grey patch. The way digital music sales normally operate is that when a customer purchases a song, a reproduction of the ‘master’ file is made, which requires a licence from the label or artist.

The Recording Industry Association of America (RIAA) and international record label EMI Music have challenged the legality of the service on the basis that ReDigi infringes copyright when a ‘copy’ of the track is made as it is uploaded to ReDigi’s servers. They claim that this copying has not been done under a licence, irrespective of the fact that the original file is removed from the user’s computer once it has been uploaded to ReDigi.

Google has also weighed in on the legal debate by suggesting that a finding against ReDigi could potentially place the legality of cloud computing under…well, a grey cloud.

But in the midst of the current legal stoush, the short-sighted labels appear to be missing the elephant in the room: consumers are willing to pay for music. In an era when music piracy is rampant and labels desperately scramble to give users a commercial incentive to pay for music, the success of a service like ReDigi should be seen as a silver lining.

What does this mean for music lovers and music labels?

Legal hurdles aside, services like ReDigi provide a compromise between the mainstream digital music stores and the illegal (and unreliable) file sharing sites. As songs on digital music stores in Australia now nudge upwards of $2 each, it is unsurprising that consumers are turning to alternative sources.

Though ReDigi shows promising early signs, it is still difficult to assess its potential popularity with music fans. On one hand, the lower price point may be enough to persuade the teetering, borderline 'pirates' to start paying for music. But humans are creatures of habit, and convincing someone who perceives little value in digital music that they should all of a sudden pay for it might require some pretty strong arm-twisting. Nevertheless, the concept of second-hand digital music might serve as an acceptable entry point for those who don’t currently take part in the legitimate music market.

On the other hand, retail consumers rely on trust and seek consistency. One-stop-shops, like iTunes or Amazon (which never 'run out of stock') offer the reliability and consistency that consumers will want. The seamless shopping experience and interactivity offered by the major players is unlikely to be replicated by ReDigi. But ultimately, that is something that will depend on how widely ReDigi is adopted, and the depth of its repertoire.

Would I use it?

Maybe. If I’m confident that I’m not breaking the law, that the file will be compatible with my devices, and if it’s well-priced, then I don’t see why not. But a lot will come down to the user experience. If I have to spend hours on end refreshing the site, trawling for that one pesky Jason Donovan track, then I’m better off trudging down to my local second-hand CD store and putting up with those darn scratches.


Monday 16 April 2012

Can My Facebook Photos Negate My Good Fame and Character?

Dr Catherine Bond

Teachers of legal ethics are to some extent used to the unusual questions that arise in classes on procedures and policies for admission to practise as a solicitor or barrister. In many instances this class will be a student’s first exposure to what happens post-law school and the requirements that the student be both eligible to be admitted (having previously completed the requisite academic qualifications and practical legal training) and suitable to be admitted, on the basis that he or she is a ‘fit and proper person’. A fit and proper person is defined to include a person of good fame and character who is not insolvent, has not previously practised in Australia or overseas without a practising certificate, and has not previously committed an offence. Perhaps understandably, when students become aware of these rules, closets full of skeletons past begin to open and nervous students begin to question whether this or that incident could have an impact on their admission to practise law.

Great emphasis has been placed, in New South Wales and more generally, on the act of disclosure: that an applicant disclose any prior or current behaviour that may negate their good fame and character, ranging from receiving a transport fine to a finding of plagiarism while at university. The forms that potential solicitors must complete are geared towards this act of disclosure, containing a number of general conduct statements that, if any is not true of the applicant, require the applicant to ‘strike out’ the statement and disclose the circumstances as to why it may not be true. A failure to disclose can lead the Legal Profession Admission Board to refuse to admit an applicant or, if the failure to disclose is discovered following admission, can see the solicitor struck off from legal practice.

In a recent class a discussion arose as to what impact the existence of photos on Facebook may have on an applicant’s good fame and character. The debate follows a recent flurry of media reports of employers asking for the usernames and passwords of potential employees’ Facebook accounts as part of a virtual ‘background check’. In turn, Facebook has advised its members not to disclose such information. The student’s question was therefore quite topical: if employers are interested in what is on a potential employee’s Facebook page, then surely the Legal Profession Admission Board might be too, particularly given that many individuals have photos depicting events, and other information, available via that social networking site that may ultimately negate their ‘good fame and character’.

Public embarrassment from Facebook photos is not a new phenomenon; Australia’s ‘public figures’ have in the past had photos posted either by themselves or their ‘Facebook friends’ published in the media. In 2008 a number of provocative photos of Olympic gold medallist Stephanie Rice that appeared on Facebook were subsequently published in a number of Australia’s major newspapers, tarnishing both the public ‘golden girl’ image of Rice and that of her then-boyfriend, fellow Olympic swimmer Eamon Sullivan. Rice’s subsequent 2010 experiences with Twitter, which culminated in a teary press conference where she publicly apologised for an offensive tweet, further indicate the damage that an over-exuberant use of social media can cause.

Yet it is becoming difficult to avoid social networking if students want to keep informed about events going on in law schools, universities and law firms, with an increasing number of public and private organisations creating Facebook pages or Twitter feeds to notify interested parties of news, legal updates and events. In England the UK Supreme Court has an official Twitter feed where the release of decisions is posted, questions are answered and job opportunities with the court are listed. Indeed, it is likely that, as Generation Y and the ‘digital generation’ move further into the workforce, this trend will both continue and grow. Thus, on the one hand, social networks are a valuable source of information for students; on the other, students may feel unable to use these sites for their primary purpose – ‘networking’ and connecting with friends – for fear that their activities may be accessed by potential employers or ultimately affect their admission to legal practice.

It appears that today’s students must find a balance between a fleeting moment that may have affected their ‘good fame and character’ and the permanent digital capture of that moment on Facebook. In any event, we may be moving towards a system where potential solicitors have to disclose what is on their Facebook pages.

Tuesday 10 April 2012

The Most Secure SmartPhone?

Alana Maurushat and David Frew

With each new technological development or release of a new product comes the often-overlooked question, “Is this technology secure?” Most of us are quick to notice the price, then dive straight into the fascinating world of “what new things does my new gadget do?” Companies rush to deliver products jam-packed with applications and features in order to meet the Christmas rush. Security, while part of the process, does not play a significant role in hardware and software development. This raises the question: which smartphone is the most secure?

This is an easy question to answer – whichever smartphone has the smallest market share. Why? Criminals are drawn to technologies with the most customers. The black market exploits and targets the companies who dominate the market. Market research by the NPD Group, Inc. suggests that Microsoft’s smartphone operating system has the smallest share, accounting for 2% of smartphone sales since launch. Apple’s iPhone follows at 29%, with Google’s Android leading at 53% of the total market. The Microsoft smartphone is the safest – in attracting the least attention from the black market – because it is the least popular.

Malicious applications are developing quickly to take advantage of the smartphone market. There are some security features of both the iPhone and Android that are worth considering. iPhone “apps” downloaded from the App Store must first be vetted by the Apple security team; though this process is by no means foolproof. Android, on the other hand, does not vet any of its apps, only removing insecure and malicious apps once they are discovered. This does not, however, mean that the iPhone as a base product is necessarily safer than the Android.

Most smartphones run on a 3G or 4G system. These systems were designed with some security in mind. The typical 3G network allows User Equipment (UE) to verify that it is connecting to the intended network rather than an impersonator, and uses a block cipher to encrypt data. In most Australian cities there is excellent 3G and, increasingly, 4G coverage. In more remote areas, however, there is only 2G coverage. 2G is extremely insecure, as it was not developed with any security mechanisms in place. This makes any smartphone running on a 2G network susceptible to message interception and all sorts of cybercrime. Most smartphones will automatically look for 2G coverage when no 3G or 4G is available. Android allows the user to set a default so that the phone will not connect to 2G coverage when a 3G or 4G network is unavailable. The iPhone does not offer this setting. Thus the user cannot instruct an iPhone not to fall back to 2G coverage, which, in turn, may expose iPhone users to cybercrime.

In recent times, attention has also been paid to the efforts of a variety of security experts in exposing other vulnerabilities of the Android system, though it would also be possible to exploit such vulnerabilities on Apple’s iOS. Though the Android security breach was extremely expensive (US$15,000 in software and development), it also relied upon the complete trust and lack of awareness of the greater smartphone-using population. Whilst Apple, Google and Microsoft will do everything in their power to protect their smartphones from unauthorised access, there is little they can do to prevent users from personally authorising malware. Ironically, this method of breach is both the most potent and the easiest to prevent, as it simply involves educating users to be savvy about links sent to their phones via text, particularly from unrecognised numbers.

The jury is hung: both the iPhone and Android dominate the smartphone market, and both have features which allow, if not altogether encourage, cybercrime. So if safety is your ultimate concern, head for the Microsoft smartphone.

 
Image by William Hook, made available by Creative Commons licence via Flickr.

Friday 9 March 2012

Bridging the divide, over distance and time

Sophia Christou

One of the key goals of the Australian Government’s National Digital Economy Strategy is to increase levels of digital engagement in regional areas, and to narrow the digital divide between regional and metropolitan communities and businesses. Rollout and take-up of the National Broadband Network (NBN) and the opportunities it presents – according to the policy, to increase access to infrastructure and services, ultimately raising productivity across regions – is seen as one means of achieving this.

Amidst heated debate over the politics, costs and outcomes associated with the Government’s NBN policy, it is worth reflecting upon some of the motivations underlying this emphasis on regional access to technology. Concerns about equitable access to services and information, national development goals and maintaining connections with regional life in the midst of technological change are anything but new.

During the 1920s, the new medium of radio was allowed to develop as an experimental technology largely in the absence of state oversight. Over the course of the decade, Australian politicians of all colours gradually recast the medium as one with great potential for assisting national progress, keeping the country’s small, widely-spread population informed and connected. Relying upon the constitutional grant of power in respect of communication technologies such as telephones and telegraphs, the Bruce Government (National/Country Party coalition) pressed ahead in the late 1920s in regulating the expansion of radio infrastructure and overseeing licensing systems for radio stations.

General Electric radio, circa 1952
One of the foremost reasons presented by the Government for establishing a national public radio service was the continuing neglect of many regional areas by early commercial radio stations. Regional population levels meant that broadcasting as a commercial undertaking in some of these areas was not financially viable. As part of the solution, the Australian Broadcasting Commission (ABC) was established by statute in 1932 as a national radio service, funded by public money and with responsibilities for broadcasting information and entertainment that would be of value to all audiences, regional as well as urban.

Efforts to maintain access to technology and information for regional audiences were not limited to government. We see this reflected in the business practices of audience survey firms that compiled ratings data – first for radio, and later, television. Two major firms dominated the Australian ratings business up until the 1970s – McNair and Anderson Analysis.

George Anderson recognised the importance of the ratings results particularly for small regional television stations serving local viewers, despite the challenges and costs often involved when surveying regional audiences. These types of services were a source of up-to-date information and entertainment for their communities, but because they were still essentially commercial undertakings, their continued existence relied on convincing station owners and advertisers of financial viability. In these cases, ratings data was not just a business service for station operators and advertisers; Anderson took the view that the integrity and accuracy of his service could play a part in representing the interests of regional audiences in an industry that concentrated mostly on metropolitan audience preferences.

Whether we are looking back at the earliest days of broadcasting, or forward to the digital economy goals of the current Government, we find an enduring interest in promoting the engagement and visibility of regional populations in media and digital landscapes. Arguably, this means more than just connecting regional populations to information and entertainment created for urban users. The connection moves in both directions. The need to maintain a collective consciousness of regional life seems to take on greater significance when technological advances – radio, television, digital media and ecommerce – threaten a greater separation between the reality of a largely metropolitan population and a service-based economy, and how we would like to remember or imagine ourselves to be.

From a pragmatic point of view, equitable access to digital infrastructure and services is of course fundamental to the national interest in economic growth and maintaining standards of living in both regional and metropolitan areas. It might also be said that ongoing efforts to promote regional access to technology are about more than just the interests of regional populations. Drawing attention to these interests, and more importantly, encouraging the visibility of regional life through local media forms and digital services, is one way of maintaining identification with the iconography and nostalgia associated with country Australia at a national level.


Image by Fernando Candeias, made available by Creative Commons licence via Flickr.

Monday 20 February 2012

@Courtroomjunkie: Leave your phone at home!

Fatimah Omari

A young man recently had the audacity to steal a police officer’s hat from a Sydney courtroom. To the embarrassment of the thief, CCTV footage showed him looking up at the cameras seconds before committing the crime. Were it not for the CCTV cameras installed in the courtroom, the Police would have been at a loss to explain how a $150 hat could suddenly vanish into thin air. So what could possibly motivate the brazen young thief? The man, a part-time dancer, sought a genuine police hat to add an element of reality to his dance ensemble. The magistrate did not share the same zeal for costume authenticity and described the crime as ‘stupidity at its highest’, placing the man on a two-year good behaviour bond.

This story got me thinking: what impact do we have on the administration of justice when we bring our own technology into a courtroom? In a world of iPods, iPhones and iPads, we have clearly become addicted to a drug called technology and consumed by one mantra: iCan’tLiveWithoutIt. While the judiciary is embracing the shift towards sophisticated electronic courtrooms, many judges remain somewhat hostile towards the use of electronics by members of the public. The capacity of modern mobile phones and laptops to covertly capture sound and video or to instantly transmit information across the globe at the touch of a finger is proving to be a challenge for courts and judges.

Restrictions on the use of technology by members of the public are increasingly being introduced to avoid unnecessary interruptions to court proceedings and to protect the identities of witnesses and jurors.

A young Sydney woman recently discovered that justice is swift for those who flout the rules. The woman in question was charged with contempt after her inner photographer came out to play. She had heard through the grapevine that a family friend was serving on a jury and, to mark what she believed to be a notable occasion, the woman took a photo of the courtroom and several jurors’ faces. In a world of tweets and tumblrs, such images can be mass-broadcast, edited, tagged, discussed, re-tweeted and blogged in a matter of minutes.

This woman insisted that she attended court with good intentions and for the purpose of satisfying her curiosity about the Australian legal system. The judge handed down a slap on the wrist and released her without conviction. In contrast, a UK judge recently sentenced a man to two months in prison in order to send a simple message to the public: photography in the courtroom will not be tolerated. Imagine the impact on a closed session of court if a reckless Gen Y juror tweeted a blow-by-blow account of proceedings.

It may be obvious to some that the taking of photos, capturing video or recording speech and sounds in a courtroom is a no-no. However, the cases mentioned above are a sign of the times and reflect the impact of the technology revolution on human behaviour. It has become commonplace for a person to pull out their phone in response to anything mildly photogenic, so it should come as no surprise that the knee-jerk reaction of one woman, who was excited to see a familiar face in the jury, was to take a photo. The use of camera phones to capture and instantly circulate weird and wonderful images has become popular, particularly amongst younger generations. With every moment now being regarded as a Kodak one, the photographer feels compelled to share with masses of digital friends and random acquaintances.

Of course mobile phones and cameras are not the only devices capable of frustrating judges and court officers. When I worked as a paralegal on a case involving terrorism charges, I witnessed the transformation of the Sydney West Trial Court into a fortress. Dual security checkpoints at the entrance to the complex and the courtroom made me feel like I was passing through stringent airport security. Since the trial concerned matters of national security, all recording-enabled devices had to be surrendered prior to entry into the courtroom. Separation anxiety ran high.

The intimidating routine of being scanned with a wand, having bags checked and handing over phones and laptops quickly became annoying for paralegals and regular visitors. However, there was no denying that electronics were a potential security risk given their diminutive size and ubiquitous nature. According to a court officer, confiscation of my iPod was necessary as (with a small attachment) it is able to record sound.

The technology revolution has proved to be a double-edged sword. With respect to courtrooms, the risk lies not only in the ability to discreetly photograph or record sensitive material, but also in the ability to instantly transmit this data. Fortunately, such violations of court rules are rare and, for the majority of people, common sense prevails over the desire to share images taken inside the Supreme Court.

 
Image courtesy of Robin Hutton, made available by creative commons licence via Flickr.

Tuesday 24 January 2012

Draining the Heart of the Internet: Is targeted marketing destroying the surfing experience?

Izzy Woods

The saying “just because you’re paranoid doesn’t mean they aren’t out to get you” can be directly applied to Internet marketing these days. Targeted ads, often courtesy of Google AdWords and AdSense, appear on just about every page we visit online. Type an email to your mother and mention the carrots you ate the night before, and suddenly every sidebar and header ad is about carrots, cooking, or where to find cooked carrots. Casually type a comment about babies while responding to a friend’s Facebook comment, and almost instantly your entire Internet experience will involve baby-centric banner advertisements and notices. While AdWords and search engine marketing techniques have exponentially increased the efficiency of marketing and the reach of products and brands, and while most consumers have become accustomed to this rapid response to the minutiae in our heads, the practice is still mildly unsettling.

Back in the day

It all began with pay-per-click advertising in the late 90s. The ability of advertisers to place their ads where a potential customer could easily select the option to visit their site or product proved to be quite successful for all involved. In the late 90s, the average consumer was still wildly enamoured with the very fact that “surfing” the net, moving easily from place to place, was an option, so having a pay-per-click ad or two appear seemed novel. The fact that the ad was targeted was interesting as opposed to annoying. Google AdWords, launched in 2000, was the harbinger of developments to come. The AdWords program is now an integral part of our online experience. In a nutshell, the service, and others like it, allows advertisers to choose a series of keywords that will trigger their ads in sidebars, banners, and pop-ups. The advertiser pays a specified amount per click by a potential consumer. How much the advertiser is willing to bid per click, combined with the amount of traffic their site receives and the quality of the site itself, determines how prominently the ad is displayed on the page. Most often, it is Google AdWords that turns that one line about carrots into banner ads about vegetables for the next hour, or into the advertisements for Gerber and Pampers that dot the electronic landscape once you leave Facebook.
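For readers curious about the mechanics, the auction described above can be sketched in a few lines of code. This is a deliberately simplified illustration only – the advertiser names and numbers are invented, and Google’s actual Ad Rank formula is proprietary and considerably more complex – but it captures the basic idea that placement is determined by the bid weighted by a quality measure, so a lower bid can still win a more prominent slot:

```python
# Simplified, hypothetical sketch of pay-per-click ad ranking.
# The real AdWords auction uses a proprietary, more complex formula;
# the advertisers, bids and quality scores below are invented.

def ad_rank(max_bid, quality_score):
    """Weight the per-click bid by a quality score (higher is better)."""
    return max_bid * quality_score

ads = [
    {"advertiser": "CarrotCo",      "bid": 2.00, "quality": 4},
    {"advertiser": "VeggieWorld",   "bid": 1.20, "quality": 9},
    {"advertiser": "CookedCarrots", "bid": 0.90, "quality": 6},
]

# Sort from most to least prominent placement. Note that VeggieWorld,
# despite bidding less than CarrotCo, wins the top slot on quality.
ranked = sorted(ads, key=lambda a: ad_rank(a["bid"], a["quality"]),
                reverse=True)
print([a["advertiser"] for a in ranked])
```

The point of the weighting is that advertisers cannot simply buy prominence outright: a relevant, well-built page can outrank a deeper-pocketed rival.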

This instant advertising gratification is further compounded by the use of more basic search engine marketing techniques, like keyword analysis, link and page popularity, back-end techniques like image tagging, contextual advertising, and paid inclusion in website directories, to name just a few. The result is that surfing the Internet now means navigating a series of advertisements, in addition to searching for the information you actually need. In the marketing community’s quest to generate more traffic and increase visibility, advertising has become almost as prevalent as general information on the web, and often the two are conflated in tricky ways that make it difficult to separate fact from a possibly manufactured version of the truth.

The real story

This is where “conspiracy theories” come into play. With targeted advertising and, recently, with the development of targeted links to content as well, our Internet experience is becoming more and more confined to those mentions of carrots and babies. The true “surfing” experience is no more. There was a time when it was possible to make cognitive leaps on the Internet based solely on where our brains chose to go, as opposed to where an advertiser nudged us to go. Those days seem long gone. Consumer groups have been grumbling since the advent of paid search advertising. They point to the way advertising is presented on the Internet, and how some ads are presented in such a way that the very fact that they are advertisements is hidden. Pay-per-click advertising also opened up a whole new level of online trademark infringement, as companies raced to gain possession of common or popular terms so that their ads would appear more prominently. In fact, in 2011, Google began preventing AdWords clients from buying up other links in order to increase their ranking. The end result is that the World Wide Web is becoming increasingly limited to only what exists in the user’s immediate world. Half the fun of going online used to be stumbling across a website or finding a new band from overseas. Now, the use of targeted technologies is drastically limiting our experience and, consequently, the possibility of connection or discussion with others that used to be the heart of the Internet.