Wednesday 23 November 2011

The demise of 'dot-comming'?

Joel Barrett

The Internet, as we know it, is about to change forever.

At least that's what ICANN, the Internet Corporation for Assigned Names and Numbers, would have us believe. That's because on 20 June 2011, after years of board meetings, stakeholder submissions and fine-tuning, ICANN finally approved the introduction of a program to allow a virtually unlimited number of new gTLDs onto the Internet.

A gTLD (which stands for generic top-level domain) is the Internet extension that comes immediately after a domain name. gTLDs are best explained by way of example: .com, .net and .org are the most common ones, while .info, .biz and .pro are a little more obscure. gTLDs should be distinguished from ccTLDs, which are country code top-level domains and designate a particular country (.au for Australia, .ca for Canada, and so on). A basic Internet address generally looks like this: www.DomainName.gTLD or www.DomainName.gTLD.ccTLD.
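
For the technically curious, here is a tiny sketch of that anatomy (purely my own illustration using made-up addresses, not anything from ICANN's rules): an address is just a dot-separated list of labels, and the right-most label is the gTLD or ccTLD.

```python
# Purely illustrative (not ICANN's material): split an address into its
# dot-separated labels; the right-most label is the top-level domain.
def top_level(hostname):
    labels = hostname.lower().split(".")
    # Everything to the left of the last label is the domain name
    # plus any subdomains (such as 'www').
    return labels[-1]

print(top_level("www.example.com"))     # 'com' (a gTLD)
print(top_level("www.example.com.au"))  # 'au'  (a ccTLD)
```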

There are 21 gTLDs at the moment, but this number is set to blow out under the New gTLD Program. Essentially, from 12 January 2012, any "[e]stablished corporation, organization, or institution in good standing" will be able to apply to ICANN for one or more new gTLDs to be added to the Internet. For example, a company like Canon, which has expressed interest, could hypothetically apply for .canon, .pixma, .camera, .photography, .technology, .smile or all of the above and more. A successful applicant will become the registry operator for the new gTLD, which means (among other things) that it will be able to sell a whole new set of domain names in that gTLD, keep the domain names for itself, or do a mixture of both. However, the price may prove too high for some smaller players: there is an administrative fee of US$185,000 per new gTLD, and that's just the beginning. Applying for a new gTLD could end up costing millions.

(Of course, the above description does not even begin to capture the complexity of the New gTLD Program. The gTLD Applicant Guidebook, which contains all the relevant rules and procedures, tops 350 pages.)

Although there are countless legal issues that arise from expanding the Internet so drastically (cyber-squatters and trade mark infringers will have a field day!), the interesting question for me is how companies will utilise their newly acquired gTLDs and how we, as frequent users of the Internet, will respond. Cynics and critics claim that we are so wedded to the practice of searching, browsing and navigating the Internet within the .com paradigm (a practice I like to call "dot-comming") that new gTLDs may be fun and exciting initially, but will ultimately fall by the wayside like all other fads and gimmicks. Others warn that hundreds of new gTLDs will turn cyberspace into a labyrinth of back alleys, side streets and dead ends, making it impossible to locate even the simplest piece of information. Dot-comming, while not necessarily intuitive, is at least familiar.

I tend to agree with those who argue that if companies utilise new gTLDs in innovative ways, our searching, browsing and navigating strategies will adapt accordingly. Imagine how quickly you could check your phone bill if your personalised account page was located at YourName.vodafone. Think how easy it could be to shop online for a second-hand book if you could simply type in books.eBay. Want to rent a DVD, but not sure what's available in your suburb? Go to YourSuburb.Blockbuster. There could be a complete paradigm shift in the way we use the Internet, and companies will be able to reinforce these different ways of thinking through clever and persistent marketing and advertising.

I often have to fight my instinctive wariness of new technologies (back in 2005, I could not see how an iPod could improve my life when it was so easy to play CDs in my car, and a life juggling iPods and click wheels and iTunes just seemed too complicated). But I think that if companies take up the New gTLD Program as forecast, dot-comming could soon be a thing of the past, as obsolete as the floppy disk and the Discman (technologies that were all the rage as recently as 15 years ago).

So will the New gTLD Program actually revolutionise the Internet addressing system, or will it fizzle due to lack of corporate interest? And if it does take off, will the New gTLD Program improve the way we use the Internet, or will it just encourage gTLDs to spread uncontrollably across the online landscape like weeds, leaving a swathe of confusion, counterfeiting and cyber-squatting in its wake?

Only time will tell. If all runs smoothly (and it rarely does in the world of Internet addressing), we could know as early as January 2013, when the first new gTLDs are expected to land.



Friday 18 November 2011

What we lost when we gained the light bulb

Amy Spira

In 2009, I found myself as far away from technology as I had ever been. I got off a rickety, disused school bus and watched it speed away through a cloud of dust, leaving me alone on a dirt road in the parched Nicaraguan countryside. It was the hottest afternoon of my life. After a short hike, I arrived at a small township, where I had arranged to lodge with a local family in order to immerse myself in the life of Nicaraguan subsistence farmers. In the heat of the day, the farmers took refuge in the meagre slivers of shade that the midday sun allowed. I settled into the clay-floored hut which I was to share with my host family and then joined the farmers outside.

Within minutes, and despite the heavy heat, I was itching for activity. Something to watch. Or listen to. Some news from the outside world. A conversation, maybe, but I'd need to call someone because the townsfolk were, by that stage, sleeping off their morning's work. I sat in the thick silence. And then I noticed it. A sound unlike any other – the complete absence of white noise.

In this particular town, there was no electricity.

No lights, no televisions, no computers, no nothing.

A few days later, two American travellers arrived, as I had, dusty and tired in the midday sun. One of the first things we discussed was how we could help the town to obtain enough electricity to support at least a single light bulb for each family home. The town was so remote and the infrastructure so poor that connection to the grid was unlikely. So we met with the townspeople to discuss their thoughts on installing solar panels. Our plan was to fundraise in Australia and the United States to fund the installation of panels on each family's land.

What shocked me was the sadness with which many of the townspeople greeted our proposal. Far from being excited about the prospect of electric light, my friend, Alvaro, who was 26 years old, educated and progressive in almost every other way, sighed sadly and said, "I knew this day would come. We can't avoid it forever."

There is nothing surprising about a person who uses a typewriter or who reads by candlelight for the ambience. Similarly, no matter your views on the issues, resisting stem cell research or avoiding modern medicines are actions grounded in identifiable, if controversial, drivers. But what of a person who will not use a telephone? Or a light bulb?

This kind of resistance to technology is often attributed to irrationality, technophobia or a staunch adherence to tradition. Those opposed to industrialisation and new technologies are often compared to the Luddites, who lobbied against the technological advances of the Industrial Revolution, often by destroying the machines which they considered to be destructive of social norms. The term Luddite usually carries a negative connotation, implying backwardness or primitivism. Perhaps this is because of the destructive methods the Luddites used when resisting change. Or perhaps it is because, in the industrialised world, technology is so intrinsic to "success" that, by reverse implication, a person who cannot or will not master a new technology is often perceived as incompetent, unambitious, or primitive.

What I failed to see in my enthusiasm for technology was what Alvaro's community stood to lose if it gained a light bulb. Alvaro was not blind to the benefits of electric light, but he saw what was precious in the dark of night. Over the weeks that I spent in the town, I came to see it too – the joy of visiting neighbours' homes when the moon was bright, and the debates that raged in the darkness of the family home on nights when there was only a crescent (or less) in the sky, making it too dark to leave. The town lived by the rhythm of the moon. Alvaro was right to lament the advent of an age in which there was always enough light to go out at night, or to sit alone and read.

A few nights ago, I came home from work and switched on the television. After an hour of mindless watching, I began wondering about the little town in Nicaragua. For all their concerns, the townspeople eventually capitulated to the electrical age and requested that we raise funds to bring them electric light. Solar panels were installed in 2010.

I wanted to ask Alvaro whether he was happy with the outcome – or whether electricity had changed life in the ways that he had feared.

But I may have to wait to find out – the townspeople don’t have telephones. Nor do they want them.

And who could blame them?



Image by IvanClow, made available by Creative Commons licence via Flickr.

Thursday 10 November 2011

Visionary or 'slackademic'? Social media's role in tomorrow's academia

Indigo Willing & Tseen Khoo

As the 21st century unfolds, various types of new media rival, and in some cases surpass, earlier forms of computer-mediated communication (CMC) in the extent to which they impact our lives. Twitter and Facebook have been used to most striking effect in the realms of politics and social protest movements. This is evident internationally: Icelandic MP Birgitta Jonsdottir has suggested, for example, that Iceland develop a more democratic constitution via the use of Facebook, while social networks played a notable role in the overthrow of Hosni Mubarak in Egypt and the ‘Arab Spring’ protests more broadly. Most recently, we have seen digitally mediated activism in the ‘Occupy Wall Street’ protests, where a tweet in Canada on 13 July 2011 turned into a local protest in Zuccotti Park, New York City on 17 September 2011, before quickly escalating into an ongoing global movement.

In academic fields, however, enthusiasm for social media is not always evident. Just as some disciplines in academia struggled with the idea of harnessing the potential of CMC for their research in the 1990s, many academics currently remain resistant to opportunities to shift or expand their own networking activities over into new media such as Facebook and Twitter. From our experiences with the creation of the Asian Australian Studies Research Network (AASRN), we have found that using new technology – and social media in particular – tends to divide opinion in academia rather than unite it.

Anecdotally, many academics mire themselves in the negative aspects of platforms such as Twitter, or dismiss all social media as activities befitting dilettantes and slackers. This negative orientation harkens back to the traditional denigration of academics who engaged too regularly and enthusiastically with the media. Further, many academics are sceptical of 'slacktivism' or 'clicktivism' (both pejorative terms for the emptiness that can underpin online declarations of commitment to a political, humanitarian or ethical cause).

Having hauled the AASRN network into the Web 2.0 world last year, after several years of relying on 'traditional' email, and having embraced social media for several current projects, we have perspectives that straddle the old-school technology of mailing lists and static bulletin boards and today’s enmeshed social media strategies.

The advent of intensive social media platforms has brought about a significant transformation in the way we run our academic research network. With an active Twitter stream (@aasrn), professional website and Facebook group, we are reaching many more people than ever before. The immediacy and constancy of contact through social media has served the network well, allowing us to cultivate a sense of momentum and breadth of membership.

AASRN has been around (informally) since 2000, as an offline and sometimes online group, and occasional gathering, of academics with shared interests in Asian Australian studies. It was founded to establish and deepen scholarship in the field of Asian Australian studies. Is this aspect supported through the dynamism of the social media forums? Or is new media making our research network connections more shallow (as is feared about social networks generally)? Perhaps it’s too early to tell, given that our engagement with social media is, so far, only a year old.

The inaugural Asian Australian Film Forum (AAFF 2011), however, is an event that has embraced (and been embraced by) social media, with event momentum and word-of-tweet spurring a full programme of screenings and panels of Asian Australian filmmakers and media types.

That an event about evolving screen cultures should do so well using new media and social media is not all that surprising. Most stories these days are shot on digital video. Gone are the days when budding filmmakers cut their teeth using 8mm or 16mm film, a process that also became increasingly expensive and limited to a privileged few (especially with post-production costs factored in). Even the term ‘film festival’, if not redundant, has a quaint sound to it now.

The Internet plays a vital role in the distribution and promotion of contemporary video productions, fostering the necessary networks to support them. This includes the film press, film festival organisers, film industry bodies, television networks and most importantly, film fans who can (and do) actively communicate with each other through social media.

This heightened accessibility of digital technologies nurtures fresh perspectives and innovative approaches to creating and showcasing Asian Australian stories. Both Twitter and Facebook have been indispensable to the inaugural AAFF, from sourcing filmmakers to promoting the film programme, to strengthening the engagement of academically founded entities (such as the AASRN) with Asian Australian creatives and the broader community.

There will always be a “digital divide”, and as Sherry Turkle has more recently suggested, there will always be a risk of becoming too introspective through social networking. For the purposes of the AASRN, however, the horizons of connectivity are impressively vast and, contrary to people becoming more alone together, the web is proving to be a powerful tool for our promotion of collective engagements, on and offline.

Thursday 3 November 2011

Holding a portal to the Cloud

Lester Miller

I've just received my shiny, new, brushed steel and polished glass smartphone. I'll admit it now: I'm in love with it. Before this day came, I'd talked frequently about how amazing life would be after it arrived and, now that it's here, I spend a lot of my spare time bathing in its modern visual beauty and trying to fully realise what I'm convinced are its life-enriching possibilities.

One of the important features of this smartphone is the loudly touted easy access to a cloud on which data can be stored and complex calculations can be made.

In 1950, Herb Grosch, a Canadian-born astrophysicist who worked on the Manhattan Project, envisioned a future where the whole world would run using a cloud system of computing, operated by individual local terminals but served by only 15 data centres.

Today, there are about 50,000 data centres in Australia alone – certainly more than 15 worldwide – but they are getting larger in size and fewer in number.

Arguably the largest data centre in the world, the Lakeside Technology Center, is 10 hectares (24 acres) in size. It's the nerve centre for Chicago's commodity markets and requires about 100 MW of power to operate.

The architecture of computers means they can't presently solve certain kinds of equations directly, but what they can do is break a model down into millions of tiny parts and approximate the solution to the governing equations by iterative methods. The smaller the parts the problem is broken into, the greater the precision and accuracy of the solution, but the more calculations are required.
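
To make that concrete, here is a minimal sketch (entirely my own illustration, nowhere near a real engineering package): the steady temperature along a rod with fixed ends, chopped into a grid of points and refined by repeated sweeps in which each point relaxes toward the average of its neighbours.

```python
# Minimal illustration (mine, not a real solver): approximate the steady
# temperature along a rod with fixed end temperatures by dividing it into
# n points and repeatedly relaxing each interior point toward the average
# of its two neighbours (a simple iterative method).
def solve_rod(n=50, left=0.0, right=1.0, sweeps=20000):
    temps = [0.0] * n
    temps[0], temps[-1] = left, right
    for _ in range(sweeps):
        new = temps[:]
        for i in range(1, n - 1):
            new[i] = 0.5 * (temps[i - 1] + temps[i + 1])
        temps = new
    return temps

profile = solve_rod()
print(round(profile[25], 2))  # about 0.51; the exact answer is 25/49
```

Halve the spacing between the points and the answer gets more accurate, but each sweep touches twice as many points and more sweeps are needed to settle; scale that logic up to air flowing over an entire car and it becomes clear why each run of the project below took about a day.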

My final year project for my Engineering degree, an aeon (or ten years) ago, was to design an efficient shape for a solar-powered car, hypothetically to be built and raced in the World Solar Challenge. It involved modelling the movement of air across the surface of the car, a problem governed by partial differential equations, unsolvable directly but susceptible to a good approximation. My team would go into the computer rooms on campus, enter the surface geometry of a car we thought would slip through the air cleanly, and then leave the solver to grind through the problem, which would take about a day. Visualising the results graphically would take hours more. We would return, review the results, think about how the shape could be improved and do it again.

How much quicker would the process be an aeon later? With clouds now so accessible, computational fluid dynamics packages (and countless other data analysis packages, for professionals from structural designers and advertisers to baseball scouts) can be operated remotely from our smartphones or tablets, which need only enough power to display the interface between the calculator and the user: a "dumb" terminal.

Computing is rapidly becoming a service business. Want to store your precious data? Don't keep it where moth and rust destroy. Leave it all with us for a monthly fee, or for free if you promise to notice our constant but subtly placed advertising banners.

Need a complex problem solved? Until recently, it was not unusual for a seat with a data consultant to cost up to tens of thousands of dollars. The barriers to entry for modelling and data manipulation consultants are now lower: all it takes is a short lease contract for the software and the computing power on which to run it (and, soon, your sexy smartphone).

The challenge for data centres is delivering business continuity efficiently. In 2007, the global ICT industry was estimated to be producing 2% of the world's carbon emissions, with data centres responsible for about 14% of that (roughly 0.3% of global emissions), a share that appears to be growing. Google keeps the server hallways in its centre at 27 degrees Celsius to reduce air-conditioning loads. Other centres are being built near proposed tidal power generation sites, such as one near the Pentland Firth in Scotland.

The technology can be used across the entire spectrum from trivial to world-changing problems. There are, for example, teams of people involved in the search for intelligent life beyond Earth. The SETI@home project used hundreds of thousands of idle home computers to sift through reams of radio telescope data for faraway signals that couldn't be dismissed as noise. Last year, Amazon donated a part of their cloud so that SETI could continue their efforts with even greater power for the next six years.

It's frustrating but also amazing that the problems we want to solve seem to become more complex the more we learn. The cloud will no doubt become the way we approach, and edge closer to, solutions to the trickiest and most long-standing unknowns.




Image by Karin Dalziel, made available by Creative Commons licence via Flickr.