
It could have been patented

Vilnis Vējš

A conversation with Gints Gabrāns, artist and nominee for the 2019 Purvītis Prize 

13/03/2019

Although Dubai, Shanghai and New York do not lack for skyscrapers, some do stand out from the rest. These office buildings don’t start at the foundation but somewhere up in the stratosphere – and instead of rising upwards, they stretch downwards, almost to the ground. They are part of an augmented reality (AR) project created by the artist Gints Gabrāns, which can only be seen on the screen of a smart device onto which the SAN mobile app has been downloaded. Gabrāns has been working on this long-term project for more than three years now, ensuring its constant evolution by revealing new AR options with every subsequent presentation. It is typical of Gabrāns to spend an extended period of time developing a theme linked to the relationships between science, technology and daily reality, and then to be swiftly lured in by the temptation of new discoveries; unsurprisingly, he has been shortlisted for the Purvītis Prize twice before – in 2009 and 2015. The last time, he had just begun his project Out of Nowhere – plastic sculptures made by pouring liquefied polyethylene into cold water. The project culminated a year later in a solo show in Klaipėda, Lithuania, where, after having undergone computed tomography scans, the sculptures could be viewed with 3D glasses as video projections. These were the first of Gabrāns’ sculptural models to move from the real environment into the virtual one. Gabrāns represented Latvia at the 52nd Venice Biennale (2007) and has participated in the art biennials of Rauma (2006), São Paulo (2004), Moscow (2011) and Stavanger (2016). Gints Gabrāns has been nominated for the 2019 Purvītis Prize for the public art project Transreality Zirgu Street (November 16-19, 2018) and its sequel, SAN ofisu debesskrāpji / SAN Office Skyscrapers (from December 7, 2018).


When did your art move from real space to virtual space? You were shortlisted for the Purvītis Prize in 2015 for the project Out of Nowhere, in which you made irregularly formed sculptures by pouring melted plastic into water, but later on, these same sculptures could be viewed as part of the SAN project.

There’s a different aspect to the plastic sculptures which were transferred into AR; I didn’t create SAN because of them. That’s the way the space generated by SAN expressed itself in the beginning. The plastic sculptures had been imaged with x-rays, and I had a 3D model of each sculpture – of both the inside and the outside. It was a very receptive material for transferring to the virtual environment, because 3D models are precisely what SAN uses. It also resonated well with the way the plastic sculptures had developed through evolution [Gabrāns remelted and poured anew a portion of the sculptures, keeping only the best ones – V.V.]. Through natural – or artificial? – selection, increasingly complex structures were kept. That’s the point where, obviously, the issue of distribution appeared. By way of the SAN space, in one broad stroke, the plastic sculptures came into all of the leading art museums – beginning with New York (the Guggenheim Foundation and MoMA), then the Centre Pompidou in Paris, and onward throughout the world.


How did you come up with the idea that your new art project was going to be an application? Were you already using a lot of other apps at the time?

No, it was the exact opposite. I looked at the already existing AR apps as possibilities, but I concluded that none of them had what I needed. I didn’t want to make a project based on an already existing app, so a new algorithm had to be made from the ground up. It differed from the previous ones, which up to then had been tied to optical markers or planes that served as reference points for placing AR objects into the virtual space. SAN distinguishes itself because I felt the need to work with large spatial areas – actually, the whole planet – by using Global Positioning System (GPS) coordinates of the kind that can be read from Google Maps. I wanted the objects to be truly three-dimensional, of the kind which you could move through and walk around, and which didn’t need a visual marker. Using the existing applications, if I had wished to place an object in, for instance, the Tate Modern, then it would have had to be done in a subversive way, or I’d have needed to officially arrange for the placing of a physical marker that the application could detect. With SAN, structures that I’ve created can exist in generated reality completely independently of the existing space. I was specifically interested in large scales.
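
SAN’s own code has not been published, but the general principle Gabrāns describes here, anchoring a virtual object to geographic coordinates and then working out where it sits relative to the viewer, can be sketched roughly as follows. The function names and the Riga coordinates below are purely illustrative assumptions, not part of the actual app:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, in metres

def geo_to_local_offset(viewer_lat, viewer_lon, obj_lat, obj_lon):
    """Approximate east/north offset (metres) of a GPS-anchored object
    relative to the viewer, using an equirectangular approximation.
    Good enough for city-scale distances, not centimetre-level work."""
    lat_mid = math.radians((viewer_lat + obj_lat) / 2)
    d_lat = math.radians(obj_lat - viewer_lat)
    d_lon = math.radians(obj_lon - viewer_lon)
    east = EARTH_RADIUS_M * d_lon * math.cos(lat_mid)
    north = EARTH_RADIUS_M * d_lat
    return east, north

# Example with made-up coordinates: an object anchored a few hundred
# metres north-east of a viewer standing in central Riga.
east, north = geo_to_local_offset(56.9496, 24.1052, 56.9520, 24.1090)
print(f"object is ~{east:.0f} m east and ~{north:.0f} m north of the viewer")
```

Once such an offset is known, no visual marker is needed: the object’s position follows from the coordinates alone.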


Can you place the object so precisely that it can be viewed exactly in the Tate Modern’s Turbine Hall – rather than a bit to the right or the left, or in the neighbouring exhibition halls, or on the wrong floor, for instance?

Yes. There is no precise height to it, however, because the application is based on GPS coordinates. A set height could, theoretically, also be included, but at the moment, if the viewer is standing on the roof of the building, the whole large-scale sculpture also rises to that height.

How did you go about executing this idea? Did you look for collaborative partners?

I turned to the Overly agency, which at the time was a startup but, as I understand it, is now an established company working precisely in AR. [Overly is the first and only company in the Baltic states that focuses on developing AR technology and creating AR solutions – V.V.] They have their own app. Perhaps their programmers developed SAN because it seemed like an interesting thing to do. They had to write a new algorithm – all of the principles that the space would adhere to had to be conceived. The algorithm is grounded in GPS coordinates, a gyroscope, and a compass – this creates the conditions for the objects to simply hang in the air. There weren’t any other apps like that at the time, which we can be sure of because today’s programmers use script libraries and largely work off of something that already exists. In the end, we concluded that we had to write a completely new algorithm ourselves. During at least its first and second years, it was something that could have been patented. Software itself isn’t patented, but the method was patentable. In this case, we had created a different method by which AR objects are placed. The patenting process wasn’t carried out to the end, but in its initial stage, the method was deemed patentable.
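
Overly has not published how SAN actually combines these sensor readings, but one plausible reading of “GPS coordinates, a gyroscope, and a compass” is that the GPS offset gives a bearing and distance to the object, while the compass heading and the gyroscope’s pitch tell the app where the camera is currently pointing, so the object can be drawn at a fixed spot in the air. A minimal sketch of that idea, with all numbers purely illustrative:

```python
import math

def bearing_and_distance(east, north):
    """Compass bearing (degrees clockwise from true north) and ground
    distance (metres) to an object, given its local east/north offset."""
    distance = math.hypot(east, north)
    bearing = math.degrees(math.atan2(east, north)) % 360
    return bearing, distance

def view_offset(bearing_deg, elevation_deg, device_heading_deg, device_pitch_deg):
    """Where the object sits relative to the camera's current orientation.
    Positive yaw: object is to the right of the screen centre; positive
    pitch: above it. The heading would come from the compass, the pitch
    from the gyroscope/IMU."""
    yaw = ((bearing_deg - device_heading_deg + 180) % 360) - 180
    pitch = elevation_deg - device_pitch_deg
    return yaw, pitch

# Illustrative numbers: an object 230 m east and 267 m north of the viewer,
# floating high enough to sit 30 degrees above the horizon, seen through a
# phone pointing at heading 25 degrees and tilted up by 10 degrees.
bearing, distance = bearing_and_distance(230.0, 267.0)
yaw, pitch = view_offset(bearing, 30.0, 25.0, 10.0)
print(f"bearing {bearing:.0f} deg, distance {distance:.0f} m")
print(f"draw it {yaw:.0f} deg right of centre and {pitch:.0f} deg above it")
```

Because the drawing position depends only on the sensors and the stored coordinates, the object keeps “hanging” in the same place as the device turns.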


Does the method have the potential to be used in other ways, i.e. not just for art but for practical or scientific purposes? I ask because, in your work, you have often come up with utopian – perhaps even realistic – proposals that straddle the borders of art and science. For example, in the project FOOD, you offer the world a way to solve its food shortage problem by synthesizing an enzyme that allows people to consume cellulose.

When we started creating the app, there were attempts to write similar scripts for urban infrastructure, such as for underground cables. But there’s the small issue that it’s not practical to use GPS coordinates for such precise things. There is no data available that would be accurate to the centimetre.

The AR project questions whether an artist even needs exhibition halls, art institutions, and so on.

I also had a work called Since I no longer participate in exhibitions and do not belong to so-called contemporary art (2018). Yes, at that time, the inflation of contemporary art institutions was noticeable. At the moment, everything is moving forward with a kind of cultural inertia, but these institutions no longer have anything of worth to offer artists that would make them willing to sacrifice for the institutions’ benefit. The original purpose of SAN was the creation of a new space in which I could be completely independent in my art.


The app has been around for a number of years now, and it has definitely evolved.

The main evolutionary aspect is… In truth, the basic algorithm has not changed. But after the application has been running for three years, all the while creating an illusion meant to make your senses accept these augmentations of reality, the app now works much better for me. However, the real reason behind this is that, after having looked at it for three years, I am now able to accept virtual objects as having gained their place in real space. I already feel their dimensionality. I assume that people who look at reality through SAN for the first time are at the stage where I was at the very beginning. A great deal of evolution has happened, but in the human mind.


Does this mean that you are currently creating objects in AR using another approach?

Yes. Despite the fact that [virtual] objects can be placed into real space, with their perspective correctly changing when we move, they are still alien to our sense of perception. Problems arise in many places. When [virtual] sculptural objects are placed in AR as we did in the beginning, all it takes is for a power line to get in the way for our perception to feel that something’s not right. That’s because for a [virtual] object that is perhaps a kilometre up in the sky, its digital layer covers up details that are closer in real life, i.e. that lie within that one kilometre between the viewer and the object. The [virtual] object appears to be on this side of the power line. Although it would theoretically be a good thing if we could see objects through walls, the human brain says: no, that is in no way an object that is kilometres away! Over these three years, I have understood how to make things that people recognise and accept, by placing them in the spatial model that exists in our brains. There have been experiments in which a spatial grid is placed over a simple object; even when you go into a forest, the grid remains visible through all the trees. Even without an app, we perceive the real world with ready-made models. In order to accept the augmented reality generated by SAN, one’s perception must also learn to read completely new mental maps – which is not something that everyone can or wants to do. Everybody perceives spatiality differently.
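
The occlusion problem Gabrāns describes, a nearby power line that should hide part of a distant virtual tower but does not, comes down to a per-pixel depth comparison. The sketch below is only a conceptual illustration, not SAN’s code; the pixel colours and distances are made up:

```python
def composite_pixel(camera_rgb, virtual_rgba, real_depth_m, virtual_depth_m):
    """Per-pixel occlusion test for AR compositing.

    If there is no depth estimate for the real scene (real_depth_m is None),
    the virtual layer is simply drawn on top: that is why a power line a few
    metres away can look as if it were behind a tower a kilometre away.
    With a depth estimate, nearer real geometry correctly hides the
    virtual object."""
    r, g, b, alpha = virtual_rgba
    if alpha == 0:
        return camera_rgb              # nothing virtual at this pixel
    if real_depth_m is not None and real_depth_m < virtual_depth_m:
        return camera_rgb              # the real object is closer, so it occludes
    # otherwise blend the virtual pixel over the camera image
    return tuple(int(alpha * v + (1 - alpha) * c)
                 for v, c in zip((r, g, b), camera_rgb))

# Made-up values for the power-line case: a dark line pixel about 8 m away,
# a pale virtual tower about 1000 m away.
print(composite_pixel((30, 30, 30), (200, 200, 220, 1.0), None, 1000.0))  # no depth map: tower wins
print(composite_pixel((30, 30, 30), (200, 200, 220, 1.0), 8.0, 1000.0))   # with depth: the line occludes
```

The depth masks mentioned later in the conversation are one way of supplying exactly this missing depth information.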


Art professionals have been aware of SAN for quite a while now, but you were nominated for what you yourself call your solo show Transreality Zirgu Street, which took place in the autumn of 2018 during the Staro Riga festival. Your AR was presented to viewers on a very broad scale.

Another important event that took place was that all of the AR works created previously – before Transreality – were erased: they were sent to their ‘digital death’. They are now completely inaccessible. I felt elated after that. This is directly related to evolution because, before, I didn’t really know what to do with the burden of digital immortality. Many times when working in collaboration with various festivals, I would be asked: ‘How long will they be on for?’, and I had to answer: ‘Forever!’ Of course, digital eternity is an extremely dubious thing. But, on the other hand, it had already become a hindrance because, for example, Riga was full of areas that had reached their maximum load of works. Riga was so loaded down that nothing new could be placed in it anymore. Then a new update of the program came, and we had to decide what to do with the old objects. I chose digital death, so that further development could continue with a breath of fresh air.


Your work Ms BoCA blanket (done in collaboration with the GolfClayderman group) was also on view for visitors of the first Riga International Biennial of Contemporary Art (RIBOCA 2018), even though it wasn’t part of the official programme. You, however, do not consider it to have been a guerrilla art tactic.

My works are not subversive because I use reality as a resource – as good material for my project. It was not intended as criticism. RIBOCA was a fun event with ‘a good texture’. The entropy of contemporary art institutions is a favourite subject of mine. Both the guests at the opening and the whole process were ‘elements’. With AR, I covered all of the venues of the biennial with a moving blanket. The subsequent deletion of all previous digital files was also partly due to the RIBOCA blanket, as it was dozens of square kilometres large. When I switched on SAN, I no longer knew myself what would appear over my head.

What was the objective you had set for Transreality Zirgu Street?

It was the first work to come after ‘the great file extinction’. There were a lot of new opportunities that only I could really understand – depth masks, portals, etc. The transreality towers that first appeared in Zirgu Street began to spread. Now these SAN office buildings can be seen in New York and elsewhere.


You said that especially during this last event, the number of SAN downloads had rapidly increased.

Times change. Previously, even those who liked the project were reluctant to download the app; they preferred to look over my shoulder and view it on my own tablet. Now people are more accustomed to it. Eleven and a half thousand downloads in four days – that's a huge number. I noticed that even older people were trying to download the app. I didn't get to see if it worked for them, though.


Will AR become daily life for us?

I don’t place ordinary and augmented reality up against each other. Virtual reality has overlapped with completely everyday situations. People look at Google Maps and simply go where they have to go. This space has a tendency to create itself and set its own rules.
