Keynote Speakers and Plenary Sessions

KEYNOTE SPEAKERS
 

Introductory Keynote

 

Dr. Héctor Rodríguez

Héctor Rodríguez is a Hong Kong-based digital artist and theorist whose work explores the unique possibilities of computational technologies to reconfigure the history and aesthetics of cinematic art. He received the Best Digital Work award in the Hong Kong Art Biennial 2003, an Achievement Award at the Hong Kong Contemporary Art Awards 2012, and was included in the Jury Selection of the Japan Media Art Festival 2012. He was named Media Artist of the Year in 2019 by Hong Kong's Arts Development Council. He was the Artistic Director of the Microwave International Media Art Festival from 2004 to 2006, and is Director for Research and Education for the Writing Machine Collective. He was also the founder of Hong Kong's first undergraduate major in Art and Science.

The title of his talk is "Crisis, critique and technological understanding."

Abstract: A foundational theme in contemporary critical theory is the close relation between critique and crisis. This talk describes the introduction of computational technologies, and more specifically machine learning algorithms, into artistic practice as a historical crisis, which fundamentally puts into question the relation between making and understanding. From the standpoint of the artist, the question concerns the extent to which she understands the technologies that she is using. The artist often employs technical means whose internal mechanisms are obscure to her. But in art, means and ends are essentially intertwined. Any opacity in the means extends to the ends for which they are used, and so potentially threatens the integrity of artistic agency. Similar concerns about the opacity of technology have been raised, outside the art world, in the scientific community itself, giving rise to discussions about how to render machine learning interpretable or explainable. The prevalence of these discourses suggests that the obscurity in question is not a matter of individual ignorance. It pertains to the historical constitution of the technologies themselves in their essential character as formal systems, and to their formative role in what has been described as a society of hyper-control.

 

 

Mr. Refik Anadol

Refik Anadol (b. 1985, Istanbul, Turkey) is a media artist, director, and pioneer in the aesthetics of machine intelligence. His body of work locates creativity at the intersection of humans and machines. In taking the data that flows around us as his primary material and the neural network of a computerized mind as his collaborator, Anadol paints with a thinking brush, offering us radical visualizations of our digitized memories and expanding the possibilities of architecture, narrative, and the body in motion. Anadol’s site-specific parametric data sculptures, live audio/visual performances, and immersive installations take many forms, while encouraging us to rethink our engagement with the physical world, its temporal and spatial dimensions, and the creative potential of machines.

The title of his talk is "Space In the Mind of A Machine."

Abstract: In taking the data that flows around us as his primary material and the neural network of a computerized mind as his collaborator, Refik paints with a thinking brush, offering radical visualizations of our digitized memories and expanding the possibilities of architecture, narrative, and the body in motion. In this talk, he shares his studio's recent site-specific parametric data sculptures, live audio/visual performances, and immersive installations, which take many forms while offering a dramatic rethinking of the physical world, our relationship to time and space, and the creative potential of machines.

 

Joint Keynote

 

Prof. Adrian Mackenzie

Adrian Mackenzie (Professor in the School of Sociology, ANU) researches how people work and live with media, devices and infrastructures. He often focuses on software and platforms. He has done fieldwork with software developers in making sense of how platforms are made, managed and maintained (see *Cutting Code: Software and Sociality*, Peter Lang, 2006). He has tracked infrastructural experience (*Wirelessness: Radical Empiricism in Network Cultures*, MIT Press, 2010). A recent book, *Machine Learners: Archaeology of a Data Practice* (MIT Press, 2017), describes changes in how science and commerce use data to make knowledge. He has a keen interest in the methodological challenges of media and data platforms for sociology and philosophy. He led work on Society and Data at the Data Science Institute, Lancaster University (2015-2018), co-directed the Centre for Science Studies, Lancaster University (2014-2017), and is currently a co-investigator on the Australian Research Council-funded 'Re-imagining the empirical' (2018-2020).

 

Prof. Anna Munster

Anna Munster is an artist, writer, and professor in Art and Design, and Deputy Director of the National Institute for Experimental Arts, UNSW. She is the author of An Aesthesia of Networks (MIT Press, 2013) and Materializing New Media (Dartmouth College Press, 2006). Both of these examine artists’ engagements with networks and digital culture and contribute a dynamic conception of digital materiality to digital arts and media studies. Anna’s newest major publication is the co-edited anthology Immediation I and II, with Erin Manning and Bodil Marie Stavning Thomsen (Open Humanities Press, 2019).

Her current research examines the relations between machine learning and visuality, with a special emphasis on how AI assemblages require and generate operative images. This research is collaborative, undertaken with the science and technology studies scholar Adrian Mackenzie and postdoctoral artist Kynan Tan. Anna is also a practising artist, regularly collaborating with Michele Barker. Their most recent commission was pull (2017), for Experimenta Make Sense: International Triennial of Media Art. Their works use moving image, soundscapes, interaction and installation design to explore human and nonhuman movement and perception. They are currently working with drones to critique and redeploy drone cinematography.

The title of their talk is "Oscilloscopes, Slide Rules and Nematodes: Perceptions of the ImageNet Observer."

Abstract: In 2019, an artist and an academic released an app that attached “labels” to images of people’s faces across social media platforms, where profile images proliferate. The app, ImageNet Roulette, which used ImageNet’s “person” classes and associated images to train on, had a brief viral uptake. As Kate Crawford and Trevor Paglen, who designed the app as a research tool, noted, the bizarre labelling that the seemingly “neutral” tasks of object detection perform reflects the wider social and political problems that accrue to the misrecognition that classification of images in AI produces. In this paper, we likewise take up the image orderings performed by ImageNet but from a different angle. When machine-learning and neural-based AI models “observe” the world they fundamentally, if arbitrarily, name it as a world of things in such a way that objectifying misrecognition becomes normal. But what happens if we conceive the situation not as one in which perception is radically erroneous, because it is objectively wrong, but rather one in which “perception” is transformed because it is affected by many other images?
 
In our marginal experiments, carried out upon a database fashioned from arXiv scientific papers—which include research into machine learning, computer vision and AI—the assembly of 20,000 object-based categories deployed by ImageNet no longer steadies experience. Instead, the experiments delegate to the objects the power to state something about how we know them. When, for example, we ran a standard deep learning classifier, pretrained on ImageNet, on the many scientific figures of graphs and diagrams in arXiv, where we saw a graph, it observed “oscilloscope,” and where we saw a flow chart, it named a “slide rule” or even “nematode.” Such egregious “mistakes” prepare us for statements not of our own making but both about and beyond our own making. A making of experience that cannot simply be fitted to arbitrary names and objects but cuts across such nominalism transversally. 

We ask what is empirically playing out in the observational processes of deep learning architectures that cannot be accounted for by either its apparent nominalism or its claims to objective realism? How might we value the forms of “perception”—observation, classification, detection, recognition—performed by AI as entangled with yet differentially propagating from human ones? Experience, as William James suggests, passes along paths of perception that shade off in gradients of anticipation, and intermediate shoals of memory and habit. It is at once ongoing, diverging, accumulating, partially organized but always incomplete. We suggest that a radical empiricist approach to machine learning, drawing on James, might be useful in getting us beyond critiquing the (human) epistemological biases of AI, and some way toward an understanding of the relationality of its modes of “perception.”

 

 

PLENARY SPEAKERS
 

Ms. Tega Brain

Tega Brain is an Australian-born artist whose work examines issues of environmental engineering, data systems and infrastructure. She has created wireless networks that respond to natural phenomena, systems for obfuscating fitness data, and an online smell-based dating service. Her work has been shown in the Vienna Biennale for Change, the Guangzhou Triennial, and in institutions like the Haus der Kulturen der Welt and the New Museum, among others. Tega is an Assistant Professor of Integrated Digital Media, New York University.

The title of her talk is "Towards a Natural Intelligence."

Abstract: What kinds of intelligences should automate decisions in our technological and infrastructural systems? How should intelligence be defined and recognized? The Solar Protocol web platform relies on an intelligence that emerges from earthly dynamics: specifically that of the sun’s interaction with the Earth. It is an experimental network of solar powered servers that directs internet traffic to wherever the sun is shining. Our lives have always been directed by a range of natural logics that emerge from the intermittent dynamics of our shared environment. Weather, seasons, tides and atmospheric conditions all dictate our behavior, enabling and constraining our movements, production and cultures. Solar Protocol uses these logics to automate decisions about how the network operates and what content is shown at different times of the day. How can we learn or relearn to design with natural intelligence? Solar Protocol is a collaboration between Tega Brain, Alex Nathanson, Benedetta Piantella and a group of volunteers who are stewarding the project’s servers around the world.

 

Dr. Rosa H. M. Chan

Rosa Chan is currently an Associate Professor in the Department of Electrical Engineering at City University of Hong Kong. Her research interests include computational neuroscience, neural prosthesis, and brain-computer interface applications. She was the co-recipient of the Outstanding Paper Award of the IEEE Transactions on Neural Systems and Rehabilitation Engineering in 2013, for research breakthroughs in mathematical modelling of the hippocampus for the development of cognitive prosthesis. Dr. Chan was the Chair of the Hong Kong-Macau Joint Chapter of the IEEE Engineering in Medicine and Biology Society (EMBS) in 2014. She was elected to the IEEE EMBS Administrative Committee (AdCom) as Asia Pacific Representative (2018-2023), and she is currently an editorial board member of the Journal of Neural Engineering.

The title of her talk is "Eavesdropping on Human Behaviors: Lessons from Engineers."

Abstract: To build better machines, engineers have accumulated over a thousand years of experience studying users. Observable user patterns, particularly those related to movement, have been used to establish design requirements. Movements reflect a user's preferences, choices and decisions, which can be manifested in forms ranging from facial expression, gait and posture, hand gesture, location and keypresses to voice. With advances in science, we can better understand the basis of these muscle-driven movements, and better sensor technologies now make it possible to study behaviors through measures other than movement. For example, wearables with surface electrodes allow us to measure electrical signals related to information transmission in the nervous system. Together with exponentially improving computational capacity, algorithms are now capable of finding patterns in data with minimal prior assumptions and without hand-crafting individual features. This talk will review tools that engineers are using to better understand human behaviors.

 

Ms. Stephanie Dinkins (Photo by Jay Adams)

Stephanie Dinkins is a transmedia artist who creates platforms for dialog about artificial intelligence (AI) as it intersects race, gender, aging, and our future histories. Dinkins’ art practice employs emerging technologies, documentary practice, and community engagement to confront questions of systemic injustice, social equity, and data sovereignty. She is particularly driven to work with communities of color to co-create more equitable, values-grounded artificially intelligent ecosystems.
 
Dinkins is an Associate Professor at Stony Brook University where she holds the Kusama Endowed Chair in Art. Dinkins exhibits and publicly advocates for equitable AI internationally. Her work has been generously supported by fellowships, grants, and residencies from Stanford Institute for Human-Centered AI, Creative Capital, Sundance New Frontiers Story Lab, Eyebeam, Data & Society, Pioneer Works, NEW INC, and The Laundromat Project.

The title of her talk is "Archival Loops."

Abstract: An archive is a system or collection of historical documents or records that provide information about places, institutions, individuals or groups of people.

A feedback loop is a part of a system in which some piece of that system’s output is used as an input to organize future outcomes.

Archives and feedback loops are part and parcel of the binary calculations and algorithms on which the systems they exist within are founded.

The algorithms that run the structures we depend on are complex, inscrutable and entangled in every facet of our lives. With each encounter, we empower these systems with the trail of information we leave behind. This data is often used to watch, assess and control us as people. Our needs, hopes, dreams and desires are calculated to serve the status quo. Because binary calculations are inadequate to assess us, Archival Loops asks: How can we create less reductive systems that encourage generous, nurturing and nuanced understandings of our lives?

 

Dr. Rebecca Fiebrink 

Dr. Rebecca Fiebrink makes new accessible and creative technologies. As a Reader at the Creative Computing Institute at University of the Arts London, her teaching and research focus largely on how machine learning and artificial intelligence can change human creative practices. Fiebrink is the developer of the Wekinator creative machine learning software, which is used around the world by musicians, artists, game designers, and educators. She is the creator of the world’s first online class about machine learning for music and art. Much of her work is driven by a belief in the importance of inclusion, participation, and accessibility: she works frequently with human-centred and participatory design processes, and she is currently working on projects related to creating new accessible technologies with people with disabilities, and designing inclusive machine learning curricula and tools. Dr. Fiebrink previously taught at Goldsmiths, University of London and Princeton University, and she has worked with companies including Microsoft, Smule, and Imagine Research. She holds a PhD in Computer Science from Princeton University.

The title of her talk is "Machine Learning as Creative Design Tool."

Abstract: Recently, there has been an explosion of interest in machine learning algorithms capable of creating new images, sound, and other media content. Computers can now produce content that we might reasonably call novel, sophisticated, and even compelling. When researchers, artists, and the general public discuss the future of machine learning in art, the focus is usually on a few basic questions: How can we make content generation algorithms even better and faster? Will they put human creators out of a job? Are they really making ‘art’?  

In this talk, I propose that we should be asking a different set of questions, beginning with the question of how we can use machine learning to better support fundamentally human creative activities. I’ll show examples of how prioritising human creators—professionals, amateurs, and students—can lead to a new understanding of what machine learning is good for, and who can benefit from it. For instance, machine learning can aid human creators engaged in rapid prototyping of new interactions with sound and media. Machine learning can support greater embodied engagement in design, and it can enable more people to participate in the creation and customisation of new technologies. Furthermore, machine learning is leading to new types of human creative practices with computationally-infused mediums, in which a broad range of people can act not only as designers and implementors, but also as explorers, curators, and co-creators.

 

Mr. David Ha 

David is a Research Scientist at Google Brain in Tokyo, Japan. His research interests include Neural Networks, Creative AI, and Evolutionary Computing. Prior to joining Google, he worked at Goldman Sachs as a Managing Director, where he ran the fixed-income trading business in Japan. He obtained undergraduate and graduate degrees in Engineering Science and Applied Mathematics from the University of Toronto.

The title of his talk is "Creativity in machine learning research."

Abstract: In this talk I will discuss some of my personal experience with getting neural networks to do interesting things as part of my life as a researcher. For example, I will show how we can get untrained neural networks to generate high-resolution computer art. I'll also discuss experiments that involve collaborative sketching with artificial agents, and how such tools can also make their way into analyzing Japanese literature and writing systems. Finally, I'll talk about some work on getting artificial agents to play video games by "dreaming". By the end of the talk, the audience will have a feel for how machine learning systems can be used, and a sense of both their capabilities and their limitations.

 

Mr. Adam Harvey

Adam Harvey (US/DE) is a researcher and artist based in Berlin focused on computer vision, privacy, and surveillance. He is the creator of the Exposing.ai face dataset search engine, CV Dazzle, and the Anti-Drone Burqa (camouflage from thermal cameras), and most recently launched VFRAME.io, a computer vision system for investigative journalism.

Harvey's research has been profiled in media publications including the New York Times, Wall Street Journal, Nature, New Yorker, and the Financial Times; and shown at internationally acclaimed institutions including the Frankfurter Kunstverein (DE), Zeppelin Museum (DE), Utah Museum of Contemporary Art (US), Kemper Museum of Contemporary Art (US), and Walker Art Center (US). His most recent research involves the use of 3D rendering and 3D printing techniques to address the challenges of image training data for the development of specialized computer vision algorithms.

The title of his talk is "New Optical Regimes."

Abstract: Harvey will discuss his past work on developing countersurveillance technologies and why artificial intelligence changes the dynamics of how artists should think about creating work. The role that artists have played in creating data is largely hidden, and the role that artists could play in the future is still largely unexplored. Harvey will present his recent research on two opposing sides of this topic: exposing.ai, a project about the origins of datasets, and vframe.io, a project about creating new ways of seeing with data.

 

Dr. Yun Wah Lam

Dr. Yun Wah Lam is a biochemist and cell biologist. He was a postdoctoral researcher at the Wellcome Trust Biocentre in Dundee, Scotland. He is now an associate professor in the Department of Chemistry, City University of Hong Kong, where he has built a multi-disciplinary research network to tackle a myriad of biological problems, from environmental sciences to regenerative medicine. He has published over 100 scientific papers and patents, and is currently the leader of the “Global Research Enrichment And Technopreneurship (GREAT)” programme at CityU. He was a scientist-in-residence at SymbioticA (Perth, Australia) in 2019 and the recipient of the CityU innovative e-learning award in 2020. He is a co-organiser of Leonardo Art and Science Evening Rendezvous (LASER) and Café Scientifique in Hong Kong. He collaborated with Maro Pebo on the artwork “Microbial Emancipation” (2020), and is the scientific advisor to a number of artworks, including “Magic Wands, Batons and DNA Splicers” by Wong Kit Yi (2018) and “CRISPR Seed Resurrection” by Ken Rinaldo (2021).

The title of his talk is "Life is a mess: towards a gene-eccentric and post-teleological discourse in bio-art."

Abstract: Biological knowledge accumulated over the past 150 years, especially the recent explosion of genomic data, has demystified life to such an extent that some thinkers are convinced that “life is an algorithm”. Although the use of anthropomorphic and engineering metaphors in biology is an old fixation, “life as algorithm” is quickly becoming mainstream, not only in art and philosophy but also in the public imagination. This view engenders the idea that an organism can be reprogrammed by editing one or a few genes, just as a programme can be debugged by changing a few lines of code. I argue that the current understanding of biology is still so primitive that this algorithmic narrative is premature and reductionist, if not delusional. Human-built machines are designed around explicit functions, and progress is defined by refinements towards a purpose. Evolution, however, is aimless, driven by the provision of randomness to deal with unpredictable future challenges. Some genetic variants, after being amplified to meet these challenges, remain as remnants in the genome long after the challenges disappear. As a result, an organism’s genome is full of redundancy and unnecessary complexity, relics of long-forgotten evolutionary dramas. Studying biology in purely algorithmic terms is therefore dangerously human-centric. Instead of thinking of “organisms as algorithms”, it is probably more pertinent to imagine the genome of an organism as fragments of collective memories gathered throughout its evolutionary past and the natural history of the planet. Using examples from recent research, this lecture underscores the messy reality of life: the illogical, unpredictable, uncontainable, and ingenious. It advocates for a creative perspective in both science and art, grounded in this unsolvable messiness of biological matters, beyond the gene-centric, mechanistic, and teleological view of life.

 

Dr. Janelle Shane

Janelle Shane's AI humor blog, AIweirdness.com, looks at the strange side of artificial intelligence. She has been featured on the main TED stage, in the New York Times, The Atlantic, WIRED, Popular Science, All Things Considered, Science Friday, and Marketplace. Her book, "You Look Like a Thing and I Love You: How AI Works, Thinks, and Why It’s Making the World a Weirder Place" uses cartoons and humorous pop-culture experiments to look inside the minds of the algorithms that run our world, making artificial intelligence and machine learning both accessible and entertaining. Shane is also a research scientist at an optics R&D company, where she has worked on projects including a holographic laser tweezers module for the space station, and a virtual reality arena for mantis shrimp.

The title of her talk is "AI Just Wants to be Average."

Abstract: As today's machine learning algorithms get better at imitating human text and images, they also get better at being boring. How do you produce art if your tool is optimized to copy cliches? At Aiweirdness.com, Dr. Shane specializes in drawing the unusual out of utilitarian algorithms.

 

Ms. Jenna Sutela (Photo by Ellie Lizbeth Brown)

Jenna Sutela works with words, sounds, and other living media, such as Bacillus subtilis nattō bacteria and the “many-headed” slime mold Physarum polycephalum. Her audiovisual pieces, sculptures, and performances seek to identify and react to precarious social and material moments, often in relation to technology. Sutela's work has been presented at museums and art contexts internationally, including Guggenheim Bilbao, Moderna Museet, and Serpentine Galleries. She is a Visiting Artist at The MIT Center for Art, Science & Technology (CAST) in 2019-20.

The title of her talk is "Many-Headedness."

Abstract: I often work with words, sounds, and other living materials, such as the single-celled yet “many-headed” species of slime mold, Physarum polycephalum; a symbiotic colony of bacteria and yeast in a kombucha tea ferment; and the bacterium Bacillus subtilis nattō. I have also collaborated with artificial neural networks. In the artist talk, I will share some of my ongoing artistic research on biological and computational systems.

A lot of my recent work looks at, or looks for, the ghosts in the intelligent machines that are increasingly shaping our reality. On the one hand, it is about getting in touch with the nonhuman condition of the computers that work as our interlocutors and infrastructure. On the other hand, it is about the computers getting in touch with the more-than-human world around them.

Following alternative cybernetics, I believe that the world is not a closed jar but an open ecosystem of intelligence, always changing. I believe that our brain is not the limit of consciousness. And I believe that understanding oneself as interconnected with the wider environment, organic and synthetic alike, marks a profound shift in subjectivity: one beyond anthropocentrism and individualism.