Bjørn Staal

Bjørn Staal is an artist and software developer based in Oslo, Norway.

His work explores the dynamic interactions between computational systems, human perception, and behavior. With more than a decade of experience in multidisciplinary design and software development, Staal co-founded the experimental art and design studio Void in 2015. Focused mainly on large-scale interactive installations, Void has gained international recognition for its work at the intersection of design, architecture, technology, and art.

Since leaving the studio in 2023, Bjørn has focused on his own artistic practice, exploring how algorithms can enrich our understanding of what it means to be human in an age where more and more of our agency is being outsourced. Staal’s work has been exhibited nationally and internationally, as part of online and physical exhibitions as well as installations in public spaces. In September 2024, Staal had his first solo exhibition at Wintercircus in Ghent, Belgium.

Agent_0x00

2025

generative art, experiment, agentic AI

TypeScript, WebGL, AI, LangGraph

Agent_0x00 is the codename for an ongoing research project on agentic and generative AI in combination with classical generative systems. The project employs an agentic workflow, built with LangGraph, that orchestrates multiple AI APIs, including Anthropic, OpenAI, Hugging Face, and Replicate, in conjunction with Twitter's API to create autonomous content-generation pipelines.

This system operates as a multi-step process that transforms real-time social media data into complex audiovisual outputs, demonstrating how contemporary AI agents can bridge social media analysis, textual generation, image synthesis, and interactive 3D visualization within a single automated workflow.

The system executes a nine-step autonomous pipeline that navigates between different content modalities while maintaining thematic coherence:

  1. Scrape Twitter for the latest news content
  2. Generate an AI-powered summary analysis of the collected tweets
  3. Identify relevant biblical quotations based on the summaries
  4. Generate corresponding image prompts derived from the textual analysis
  5. Create images from these prompts using generative models
  6. Process the generated images through Marigold depth estimation to create depth maps
  7. Feed the accumulated data (images, depth maps, and quotations) into a generative script
  8. Open a browser environment and execute real-time WebGL particle simulations
  9. Render dynamic 3D point clouds with typographic treatments
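The orchestration behind these steps can be sketched as a typed chain of asynchronous stages that thread a shared state object from one stage to the next. This is an illustrative sketch, not the project's actual code: the state fields, step names, and stub implementations are hypothetical, and in the real system each stage would be a LangGraph node calling an external API.

```typescript
// Shared state accumulated across the pipeline. Field names are
// illustrative assumptions, mirroring the nine steps above.
interface PipelineState {
  tweets: string[];
  summary?: string;
  quotation?: string;
  imagePrompt?: string;
  imageUrl?: string;
  depthMapUrl?: string;
}

type Step = (state: PipelineState) => Promise<PipelineState>;

// Compose steps left to right, passing the evolving state through.
const runPipeline =
  (...steps: Step[]) =>
  async (initial: PipelineState): Promise<PipelineState> => {
    let state = initial;
    for (const step of steps) {
      state = await step(state);
    }
    return state;
  };

// Stub stages standing in for the real API calls (Twitter scrape,
// LLM summarization, quotation retrieval, and so on).
const scrapeTweets: Step = async (s) => ({
  ...s,
  tweets: ["example tweet about current events"],
});
const summarize: Step = async (s) => ({
  ...s,
  summary: `summary of ${s.tweets.length} tweet(s)`,
});
const pickQuotation: Step = async (s) => ({
  ...s,
  quotation: "an apt verse chosen from the summary",
});

const run = runPipeline(scrapeTweets, summarize, pickQuotation);
```

A graph-based orchestrator like LangGraph adds branching, retries, and persistence on top of this basic idea of nodes transforming a shared state.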

This progression from textual analysis through image generation to spatial data preparation shows how contemporary AI agents can layer computational interpretations to autonomously transform social media discourse into complex audiovisual work.
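The final spatial step, turning a depth map into a 3D point cloud, can be sketched as follows: each pixel contributes one particle, positioned by its image coordinates and pushed along the z-axis by its estimated depth. This is a minimal illustration under assumed conventions (row-major depth values in [0, 1], image origin at top-left); in the actual piece the resulting positions would fill a WebGL vertex buffer driving the particle simulation.

```typescript
// Convert a per-pixel depth map into flat xyz point positions.
// depth: row-major depth values in [0, 1]; depthScale is an
// illustrative parameter controlling how far points are displaced.
function depthMapToPoints(
  depth: Float32Array,
  width: number,
  height: number,
  depthScale = 1.0
): Float32Array {
  const points = new Float32Array(width * height * 3);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const i = y * width + x;
      points[i * 3 + 0] = x / width - 0.5; // center x in [-0.5, 0.5]
      points[i * 3 + 1] = 0.5 - y / height; // flip y (image origin is top-left)
      points[i * 3 + 2] = depth[i] * depthScale; // displace by estimated depth
    }
  }
  return points;
}
```

Uploading this array as a vertex attribute lets a WebGL point shader render the image as a cloud of particles whose relief follows the Marigold depth estimate.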