Computational Abiogenesis Simulation

  • Published 21 Sep 2024
  • Abiogenesis simulation in Python using PyTorch. Inspired by arxiv.org/pdf/....
    The simulation begins with a grid filled with random numbers, where each spot holds a sequence of numbers that act like a simple program. Each number represents an action or piece of data that can be read, copied, or modified with every step of the simulation.
    From this initial random noise, self-replicating programs gradually emerge and evolve over time.
    Instructions:
    - Increment/decrement head0/1 position (3 dimensions)
    - Increment/decrement array value at head0/1 position
    - Copy value from head0/1 to head1/0
    - Enter/exit loop conditioned on value at head0
    - No action
    Differences from the original BFF simulation:
    - Instruction sequences interact with neighbors through localized head operations instead of concatenation
    - Each instruction sequence has the same computation budget: one instruction per iteration
    Code: github.com/Neb...
    Arguments used in this simulation:
    python bff_2d.py --height 128 --width 256 --depth 64 --num_instructions 64 --num_sims 2000000
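The instruction set above can be sketched as a toy single-tape interpreter. This is a hypothetical sketch, not the repository's code: the opcode numbering, names, and 1-D tape are invented here, while the actual simulation runs whole grids of sequences in PyTorch with 3-D head positions.

```python
import numpy as np

# Opcodes (numbering and names are assumptions for illustration):
NOOP = 0
H0_FWD, H0_BACK, H1_FWD, H1_BACK = 1, 2, 3, 4   # move a head
INC, DEC = 5, 6                                 # edit value at head0
COPY_01, COPY_10 = 7, 8                         # copy between heads
LOOP_IN, LOOP_OUT = 9, 10                       # loop on value at head0

def step(tape, ip, h0, h1):
    """Execute the instruction at ip; return updated (ip, h0, h1)."""
    n = len(tape)
    op = tape[ip] % 11                 # map raw cell value to an opcode
    if op == H0_FWD:    h0 = (h0 + 1) % n
    elif op == H0_BACK: h0 = (h0 - 1) % n
    elif op == H1_FWD:  h1 = (h1 + 1) % n
    elif op == H1_BACK: h1 = (h1 - 1) % n
    elif op == INC:     tape[h0] = (tape[h0] + 1) % 256
    elif op == DEC:     tape[h0] = (tape[h0] - 1) % 256
    elif op == COPY_01: tape[h1] = tape[h0]
    elif op == COPY_10: tape[h0] = tape[h1]
    elif op == LOOP_IN and tape[h0] == 0:
        depth = 1                      # skip forward to matching LOOP_OUT
        while depth and ip < n - 1:
            ip += 1
            c = tape[ip] % 11
            depth += (c == LOOP_IN) - (c == LOOP_OUT)
    elif op == LOOP_OUT and tape[h0] != 0:
        depth = 1                      # jump back to matching LOOP_IN
        while depth and ip > 0:
            ip -= 1
            c = tape[ip] % 11
            depth += (c == LOOP_OUT) - (c == LOOP_IN)
    return (ip + 1) % n, h0, h1

# Two H1_FWD steps, then COPY_01: the copy overwrites an instruction,
# so the program modifies its own code.
tape = np.array([H1_FWD, H1_FWD, COPY_01, 0, 0, 0])
ip, h0, h1 = 0, 0, 0
for _ in range(3):
    ip, h0, h1 = step(tape, ip, h0, h1)
# tape is now [3, 3, 3, 0, 0, 0]: tape[2] was COPY_01 (7), now H1_FWD (3)
```

Because instructions and data share the same tape, a single copy can rewrite code, which is what makes self-replicators possible at all.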

COMMENTS • 4

  • @anotheral · 14 days ago

    Found this via a recent Sean Carroll Mindscape interview. Would love to know a little more about what's being represented here.

    • @nebraskinator · 13 days ago · +1

      Sure! What you're seeing is a visualization of a grid of instruction sequences. Each 8x8 square in the grid represents a sequence of simple instructions, like moving a head, copying data from one head to another, incrementing values, and so on. These sequences of instructions are executed in order during each iteration of the simulation.
      Each instruction sequence has two read/write heads that can modify its own instructions and those of its immediate neighbors, leading to changes that can alter their behavior over time. This creates a dynamic environment where sequences are constantly evolving and editing their local area.
      In the visualization, each color corresponds to a different type of operation, while shades of grey indicate no operation (inactive instructions). The simulation starts with random values, but over time, you'll see self-replicating instruction sequences emerge, adapt, and evolve.
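The neighbor interaction described in this reply can be illustrated with a small sketch. The grid layout and function name below are assumptions for illustration, not the repository's API: each head carries a 3-D position whose first two components can point into an adjacent cell, so a write through a head can edit a neighbor's instruction sequence.

```python
import numpy as np

# Hypothetical layout: a grid of cells, each holding a sequence of D values.
rng = np.random.default_rng(0)
H, W, D = 4, 4, 8                      # grid height/width, sequence length
grid = rng.integers(0, 64, size=(H, W, D))

def write_through_head(grid, cell_y, cell_x, head, value):
    """Write `value` at the position a head points to.
    head = (dy, dx, d): dy/dx in {-1, 0, 1} pick the owning cell or one
    of its 8 neighbors (wrapping at the edges); d indexes the sequence."""
    dy, dx, d = head
    y = (cell_y + dy) % grid.shape[0]
    x = (cell_x + dx) % grid.shape[1]
    grid[y, x, d] = value
    return y, x

# A head with dy=1 lets cell (0, 0) write into the sequence of the
# cell directly below it:
y, x = write_through_head(grid, 0, 0, (1, 0, 3), 42)
# grid[1, 0, 3] is now 42
```

This localized head addressing is the mechanism that replaces BFF's sequence concatenation: cells never merge, they just reach into each other's tapes.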

    • @anotheral · 13 days ago

      @nebraskinator It's pretty mind-blowing that this tends towards a metastable and complex state. Why would this do that, but not, say, Conway's Life or one of Wolfram's automata?

    • @nebraskinator · 13 days ago · +1

      @anotheral There are two key differences between this simulation and those environments. First, the instruction sequences are self-modifying, meaning their behavior can change over time based on their actions. Second, each location in the grid holds a richer type of data: behavior.
      In Conway’s Game of Life and Wolfram’s cellular automata, each position in the grid holds data that is updated according to fixed rules, but these positions do not hold executable instructions that can change over time. This static approach limits the range of emergent behaviors since the data itself doesn’t possess the capacity to evolve or adapt its function.
      In contrast, each position in this simulation holds an instruction set that actively governs its behavior and can be modified through self-interaction or interaction with neighboring sequences. This self-modifying nature allows the sequences to adapt and evolve, leading to more complex and metastable states. A key factor driving this metastability is replication: sequences that replicate more effectively can dominate, creating persistent, dynamic patterns that can self-correct and adapt in response to their environment.
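      For contrast, here is one step of Conway's Life in plain NumPy (toroidal wrap). It shows the point above concretely: the rule lives entirely outside the grid as a fixed function, and the cells store only 0/1 state, so nothing in the grid can come to encode new behavior.

```python
import numpy as np

def life_step(board):
    """One fixed-rule update: the grid holds state, never behavior."""
    # neighbor count via shifted copies of the board (toroidal wrap)
    nbrs = sum(np.roll(np.roll(board, dy, 0), dx, 1)
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
    # birth on exactly 3 neighbors, survival on 2 or 3
    return ((nbrs == 3) | ((board == 1) & (nbrs == 2))).astype(int)

# A blinker oscillates with period 2, forever, under the unchanging rule:
board = np.zeros((5, 5), dtype=int)
board[2, 1:4] = 1                      # horizontal blinker
after = life_step(life_step(board))    # two steps return to the start
```

No sequence of Life states can ever rewrite `life_step` itself, which is exactly the capacity the instruction-sequence grid has and Life lacks.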