Peter Sunde Kolmisoppi
The first speaker on day one was Peter Sunde, founder of Pirate Bay and Flattr. He gave a very interesting overview of his ‘practice’ to date, beginning with the Pirate Bay beginnings and some of how the project’s strategy was to simply take on whatever label was thrown at them and run with it. When described as terrorists, they spoofed the Pirate Bay’s IPs so it appeared to be hosted in North Korea. When they were described as a cult, they registered themselves as a religion (Kopimism), which is surprisingly cheap and easy to do in Sweden, and also provides legal protection from state surveillance. The Pirate Bay’s operations are very well documented so I won’t linger on them further here.
His most useful observations were about how decentralised internet technology increasingly carries content whose ownership is centralised and privatised by operators such as Facebook, Twitter, Google etc. This becomes a problem when we consider that 3D printing is a technology that might one day be deployed to deliver food: do we want our food supplies to be owned and managed by Googles and Facebooks?
Matt Fuller chaired the Q&A session. Exploring the ‘believable but insane’ strategies of the Pirate Bay, he asked about their philosophical direction. Sunde described the internet as something similar to a prison or, more accurately, an airport: if you want to access it you have to give up significant freedoms and enter a “rigidly controlled security environment”. In fact, Sunde said that he felt “less stressed in prison than outside”.
When asked what to do to make the internet good again, he said that “we lost that case”. The battle against the centralisation of the internet has been lost. His attitude now is that the internet is fucked, but not so fucked that this is visible to the average user; we need to let it get more fucked so normal users realise, so that when it finally breaks down we can rebuild it better, without making the same mistakes.
Bitnik showed three bodies of work for discussion. The first was a piece in which they installed phone ‘spies’ in the Zurich opera house, which randomly phoned punters and offered them a live link-up to the opera taking place. It’s based on an older model of technology, something proposed in the early days of the telephone. In the documentation they showed, the audience assumed the work was political activism: this also came up in the Arts Technologica interview with James Bridle, that audiences can no longer identify art, and assume it becomes activism when it talks about political things.
The second piece was called Delivery for Mr Assange, and consisted of a Tim Knowles-style parcel that took and uploaded photos of its progress from a Hackney post office to the Ecuadorian embassy. Their explanation of the project seemed to go on forever, but to cut a long story short, Assange eventually received the parcel and used it to raise awareness of other political prisoners. Part of me felt sad that this piece was talked up so much: it seemed like it only actually happened because it received media coverage. If it had remained secret, and not been picked up by the BBC, it would probably have been filtered out at a security checkpoint. For me, this indicated unresolved problems with how observation affects the thing being observed: old-school anthropology in a digital context. Additionally, the staging of the documentation of the piece in a Zurich gallery made me feel a bit squeamish: put together as a neat, clean-looking gallery show, using the familiar tropes that saleable gallery art often deploys. Karen Archey nailed it later by describing it as presenting things in traditional ways for no reason other than tradition.
The third piece was Random Darknet Shopper. This piece was a bot that randomly purchased an item from the darknet with a budget of $100 in bitcoin every week, the objects to be delivered and displayed in a gallery. The gallery-based display strategy made more sense here, but there was also a bit of ‘filler’ in the way that each object’s progress was made evident in the gallery with screens outlining its dispatch, progress and so on. The question raised here, when the bot purchased some ecstasy pills, was who takes legal responsibility for the actions of an algorithmic system, be they ecstasy-buying bots or self-driving cars? The police confiscated the ecstasy but let them keep the rest of the items.
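The bot’s basic behaviour, as described, can be sketched in a few lines. This is a minimal illustration only, not Bitnik’s actual code: the listing names, prices, and the function are all hypothetical, and the real piece scraped live darknet market data rather than a hard-coded list.

```python
import random

WEEKLY_BUDGET_USD = 100  # the piece's reported weekly budget

# Hypothetical stand-in for a darknet marketplace feed;
# the real bot worked from live market listings.
LISTINGS = [
    {"name": "counterfeit trainers", "price_usd": 60},
    {"name": "carton of cigarettes", "price_usd": 25},
    {"name": "set of master keys", "price_usd": 80},
    {"name": "designer handbag replica", "price_usd": 95},
]

def weekly_purchase(listings, budget, rng=random):
    """Pick one random affordable item, as the bot did once a week."""
    affordable = [item for item in listings if item["price_usd"] <= budget]
    if not affordable:
        return None  # nothing within this week's budget
    return rng.choice(affordable)

item = weekly_purchase(LISTINGS, WEEKLY_BUDGET_USD)
print(item)
```

The legal question the piece raises sits precisely in `rng.choice`: no human selects the item, so responsibility for what arrives is genuinely ambiguous.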
The Q&A was chaired by Matt Fuller again, and the first question was whether they feel the artworld restricts their practice. This is a complex question, as it touches on the potential extra-territoriality of art but also its hierarchies: while the darknet shopper exists outside legal structures, establishing that requires legal assistance that might only be available with support from a major gallery. Would a citizen deploying the same method receive such lenient treatment, even if they declared it as art?
Their response was really to do with issues raised earlier: when producing art under conditions of mass surveillance, do you self-censor? That is what mass surveillance compels you to do: to be your own policeman.
When asked about the symptoms of a broken internet, they gave an interesting anecdote about how, when working with galleries in China, people used online identity in much more flexible and mutable ways, popping up and disappearing again to retain a sense of anonymity: the opposite of Facebook’s real-name policy.
Matt Fuller made some interesting theoretical points about how disintermediation is one of the main characteristics of our current internet predicament. There are two systems: one is a Turing machine, a machine that handles symbols and moves them around; the other is capital, a system of general equivalence. He talked about “mixing the systems of symbolic transduction”. I wish I had made better notes, because it sounded very convincing the way he told it. In his view, it’s hard to know the consequences of the linking up you’re doing between those two systems.