Evolution simulation where evolutions are saved in DNA.


Introduction

Problem statement

What is the problem?: There are no open world games with a balanced, dynamic NPC population in which NPCs live and survive on their own.

Who has this problem?: People who want to play open world games with a dynamic NPC population or developers who want to create an open world game with a dynamic NPC population.

When are they having this problem?: Game developers have this problem when they want to create a game with dynamic NPC populations.

Why are they having this problem?: According to ChatGPT, there are five possible reasons: technical complexity, game design challenges, resource limitations, testing and debugging, and player experience.

In what context does this problem exist?: This problem exists in games that need a dynamic NPC population, or in games whose dynamic NPC population does not work properly.

Solution space

What are the possible solutions?: Possible solutions are decision-making AI and learning AI. Another part of the solution is researching other games and how NPCs behave there, and researching game design challenges, resource limitations and player experience, and how to implement those factors correctly in an algorithm for a dynamic and balanced NPC population.

What are common solutions?: I researched 20 games that have an NPC population, and almost all of them rely on player input to make the population do the right thing. In some of the games the NPCs also have bars for certain needs and decide what to do based on those bars. Another solution I like is that the population can only grow when you build new houses, but kids are useless: they eat without working, so too many kids lead to too little food, while building too few houses makes the population age and shrink.

What are effective and efficient solutions?: I think decision-making AI is the most effective and efficient solution. It costs the least time to build, and the NPCs behave intelligently right away.

What is the best solution for certain circumstances?: If it doesn't matter that it takes some time before the AI becomes smart, you can use learning algorithms.

Design

How can I implement the best solution that solves the problem?: I'm making a simulation program that simulates a dynamic NPC population. The algorithm should be able to be constrained for game design, resource limits and player experience; for example, you don't want populations that are too hard at the start of the game or too easy at the end of the game. The resource limits should also be tied to those camps. When a game designer or game design student is interested in this algorithm, they can use my product to see its capabilities. They can also change variables in the editor to see whether it is really usable for them, and then decide to use this algorithm for dynamic NPC populations in their own game.

Week 1

I started off the first week by doing research into AI. I researched 20 games that have AI populations. From those 20 games I found 5 with AI that uses factors which are also needed for the Utility AI, or that are roughly comparable with my algorithm but still missing some small things.

The Sims: Characters in The Sims think for themselves when the player stops giving them commands. Sims have meters for their needs. Objects in the house broadcast what they offer, for example a bed broadcasts "sleep on me to get 10 energy". A sim makes a list of every object in the house and what it offers, weighs those advertised points, checks what it needs the most and applies a multiplier based on its motive levels. If a sim is well rested a bed gives fewer points, but if a sim is not well rested the amount of sleep points it gets is drastically boosted. The list of options is ranked and the sim picks one of the top 3 interactions at random.

Not all needs are equal. If a sim is both bored and starving to death, the sim should eat rather than do something fun. This can be done by using different formulas for different needs.

Sims have personalities which give them more points for certain things; for example, serious sims get more fun points from reading a book while fun sims get more from playing a game. These factors are also influenced by the age class of the sim and by whether it is visiting a house or in its own house. The Sims' AI doesn't fulfil tasks perfectly because it chooses one of the top 3 tasks; the player needs to help with this. This is a Utility AI: a character has a predefined set of needs and looks around for the option which fulfils those needs best. In The Sims a sim can be dropped into any house and act in a lifelike manner, fulfilling its needs and socializing when its needs are fulfilled. Hundreds of objects could be added without changing the Sims' code, because the objects broadcast their points. Needs also shift during the day; hunger scores differently right after eating, for example.

Player actions get priority over AI actions.
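To make the broadcast-and-weigh idea concrete, here is a minimal sketch of how it could work. This is purely illustrative: the names such as Advertisement and AdvertisedPoints are made up, not taken from The Sims or from my own project.

public class Advertisement
{
    public string MotiveName;      // e.g. "Energy"
    public float AdvertisedPoints; // e.g. a bed broadcasts 10 energy points
}

public static class SimScoring
{
    // Weigh an advertisement against the sim's current motive level: the lower
    // the motive, the larger the multiplier, so a tired sim values a bed far
    // more than a well-rested one.
    public static float Score(Advertisement ad, float currentMotive, float maxMotive)
    {
        float need = 1f - Mathf.Clamp01(currentMotive / maxMotive); // 0 = satisfied, 1 = desperate
        return ad.AdvertisedPoints * need;
    }
}

The sim would score every advertisement in the house this way, rank the results and pick one of the top 3 at random, as described above.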

Banished: The population can only grow when you build new houses, but kids are useless: they eat but they don't work, so too many kids lead to too little food, while building too few houses makes your population age and shrink. It's a constant balancing act. If the distance between house and workplace is too great, citizens become inefficient workers. There is no money in Banished.

Life is Feudal: There is a lot of talk online about how Life is Feudal is a complete rip-off of Banished (https://www.youtube.com/watch?v=VEb6ODnhSUA). In both games the concept is the same: select a random map, start the game with 10-15 NPCs and try to survive. The seasons are dynamic, starting sometimes later and sometimes earlier.

Foundation: You start with a number of NPCs, and the player tells them where to build buildings. There is no danger; the game is about building up your own medieval settlement and watching the NPCs at work. It's a laid-back game.

RimWorld: Each character's skills are determined by their backstory. The player has to select what to do, for example selecting an area in which to collect all resources. Ways to get food: hunting, farming and milking.

Dwarf Fortress: You have skill points which you can assign to dwarves to give them specific jobs; for example, 0 skill points in blacksmithing means no blacksmith, 1 skill point means novice blacksmith, and so on. Dwarves can learn skills from each other. There are more skills than skill points, so you have to choose. You also get points to buy starting gear and starting animals. NPCs can get broken hands and be unable to work for weeks.

From this research I conclude that there is no game yet with the specific system I want to build, but I can draw inspiration from these games for my algorithm.

I asked ChatGPT: Are there any open world games with AI populations which reproduce and build houses by themselves with no player input?

This was the response:

As of my last knowledge update in January 2022, there were no widely known open-world games with AI populations that autonomously reproduce and build houses without player input. Game development is a rapidly evolving field, and new releases or updates may have occurred since then.

However, it’s essential to note that creating a complex system where AI populations reproduce and build structures autonomously is a challenging task. While some games may feature dynamic ecosystems or NPC behaviors, the level of autonomy you described is quite advanced and would likely require sophisticated artificial intelligence systems.

If there have been any developments in this area, I recommend checking the latest gaming news, developer announcements, or community forums for updates on new games or features. Keep in mind that my information might be outdated, and there could be innovative titles released after my last update in January 2022.

After I learned more about these subjects I started programming, beginning with a state machine for the agents.

Week 2

I started week 2 by finishing the state machine from week 1. The lookformate state needed some improvements and bug fixes. Then I made buttons for speeding up and slowing down time in the simulation, and made the second version of the state machine:

I started implementing this state machine but came to the conclusion that this is not what I wanted. For example, the AI agents don't use distance to x or time of day. AI algorithms which can take such factors into consideration are called Utility AI algorithms.

What if an AI agent is both starving and dying of thirst? What if an AI agent has all its needs satisfied? Those questions are handled naturally by a utility AI algorithm. A utility AI algorithm calculates what the best action is at any given time for any agent. The algorithm should not be fully predictable, because predictable is boring. The algorithm returns a score between 0 and 1 for each action of an agent, and this score is calculated from considerations.

Example: a melee attack always does a lot of damage, but if no enemy is within range it has no utility to us because it can't actually apply damage to anything. If an enemy with low HP is right next to us, the melee attack is now extremely useful. If another enemy is also within attack range but has more HP, our melee attack still has some utility because it can reduce that enemy's health, but not as much utility as attacking the vulnerable enemy.
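As a rough sketch of how such an example could be scored with considerations (a hypothetical helper, not code from the project), the "in range" and "vulnerability" considerations can simply be multiplied, so an out-of-range enemy zeroes the whole action:

float ScoreMeleeAttack(float distanceToEnemy, float attackRange, float enemyHp, float enemyMaxHp)
{
    // In-range consideration: 1 when the enemy is within reach, 0 otherwise.
    float inRange = distanceToEnemy <= attackRange ? 1f : 0f;

    // Vulnerability consideration: lower enemy health means higher utility.
    float vulnerability = 1f - Mathf.Clamp01(enemyHp / enemyMaxHp);

    // Multiplying considerations means any zero (e.g. out of range) zeroes the action.
    return inRange * vulnerability;
}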

Utility AI only decides which action to take, not how to carry it out. It is compatible with whatever other techniques your game uses: the AI only decides which action, your own code does the rest.

Scoring: the inputs are things like distance to objects and needs. That data is requested through a 'clearing house', which returns normalized values. For example: should I eat? The current food level is 58/100 and the food given by an apple is 60 points; from these a final score is calculated and used to decide what to do. Every relevant factor should be taken into this calculation.
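A minimal sketch of such a clearing house, assuming we simply normalize every raw value against its maximum and multiply the resulting factors (the combination formula is my own choice here, not the project's code):

public class ClearingHouse
{
    // Normalize a raw value against its maximum, e.g. food level 58/100 -> 0.58.
    public float Normalize(float current, float max)
    {
        return Mathf.Clamp01(current / max);
    }

    // Example query: how useful is eating an apple right now?
    public float EatScore(float foodLevel, float maxFood, float applePoints)
    {
        float hunger  = 1f - Normalize(foodLevel, maxFood); // 0.42 when food is 58/100
        float benefit = Normalize(applePoints, maxFood);    // 0.6 for a 60-point apple
        return Mathf.Clamp01(hunger * benefit);             // one possible way to combine the factors
    }
}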

Problem: at max health the heal score should be around 0, and when health is low it should be around 0.9-1. The lower the score, the smaller the chance the character is going to heal. With a straight curve, however, there is too much chance the character heals while its health is only in the middle range; the curve should instead be shaped like the lower graph.

With the lower graph the character is more likely to heal in critical situations, and otherwise carries on with other tasks.
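As a sketch of the difference between the two curves (illustrative code; in the project the shape is simply drawn in the AnimationCurve field of a consideration asset): with x defined as 1 - health/maxHealth, a linear curve heals too readily at medium health, while a steeper curve keeps the score near 0 until the situation is critical.

// x = 1 - health/maxHealth, so x is 0 at full health and 1 when nearly dead.
float LinearHealScore(float x)
{
    return Mathf.Clamp01(x);                // rises evenly; heals too often at medium health
}

float SteepHealScore(float x)
{
    return Mathf.Pow(Mathf.Clamp01(x), 3f); // stays near 0 until health is critical, then shoots up
}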

Skipping factors: make a variable which is 0 during the time you don't want the agent to do this action. For example, if the AI should skip eating for the first 3 years, make a variable which is 0 during the first 3 years and 1 afterwards. This makes the final score 0, so the decision will never be chosen while the agent is younger than 3.
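In this project such a skip factor can be written as just another consideration. A minimal sketch (this assumes the UtilityNPCAgent has an age field, which is not shown in the code below):

public class MinimumAgeConsideration : Consideration
{
    [SerializeField] private float minimumAge = 3f;

    public override float ScoreConsideration(UtilityNPCAgent npc)
    {
        // Returns 0 while the agent is younger than 3, which zeroes the whole
        // action score because consideration scores are multiplied together.
        score = npc.age >= minimumAge ? 1f : 0f;
        return score;
    }
}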

Refining: always picking the top score is too predictable / optimal. Apply some randomness instead, for example choose an option within 0.1 of the top score, choose one of the top 3 options, make it personality based (agents with certain personalities do certain actions more or less often), or use bucketing (do certain things more in certain situations by putting decisions in "buckets" with a score).
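A minimal sketch of the "within 0.1 of the top score" variant (illustrative only; the project's DecideBestAction shown further down still simply takes the single highest score):

Action PickActionWithRandomness(Action[] actions)
{
    // Score everything first so each action.score is up to date.
    float best = 0f;
    foreach (Action a in actions)
    {
        best = Mathf.Max(best, ScoreAction(a));
    }

    // Collect every action that comes close enough to the best score.
    List<Action> candidates = new List<Action>();
    foreach (Action a in actions)
    {
        if (a.score >= best - 0.1f)
        {
            candidates.Add(a);
        }
    }

    // Pick one of the near-best actions at random so behaviour stays sensible
    // but is not perfectly predictable.
    return candidates[Random.Range(0, candidates.Count)];
}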

I started implementing the old state machine behaviour with the new Utility AI system.

[Screenshots of the code for the score action, considerations and actions]

A consideration inherits from the standard consideration class:

public abstract class Consideration : ScriptableObject
{
    public string Name;

    private float _score;
    public float score
    {
        get { return _score; }
        set
        {
            this._score = Mathf.Clamp01(value);
        }
    }

    public virtual void Awake()
    {
        score = 0;
    }

    public abstract float ScoreConsideration(UtilityNPCAgent npc);
}

A consideration divides the current value by the max value and maps the result onto a response curve. This is what a consideration looks like:

public class DrinkConsideration : Consideration
{
    [SerializeField] private AnimationCurve responseCurve;
    public override float ScoreConsideration(UtilityNPCAgent npc)
    {
        score = responseCurve.Evaluate(Mathf.Clamp01(npc.thirst / npc.maxThirst));
        return score;
    }
}

An action takes multiple considerations and is used to calculate the score for that action. An action inherits from the standard action class:

public abstract class Action : ScriptableObject
{
    public string Name;
    private float _score;
    public float score
    {
        get { return _score; }
        set
        {
            this._score = Mathf.Clamp01(value);
        }
    }

    public Consideration[] considerations;

    public Transform RequiredDestination { get; protected set; }

    public virtual void Awake()
    {
        score = 0;
    }

    public abstract void Execute(NPCController npcController);
    public abstract void SetRequiredDestination(NPCController npcController);
}

A specific action class contains the logic for determining the required destination and the execute function for that action, for example the Eat action:

public class Eat : Action
{
    public override void Execute(NPCController npc)
    {

        Debug.Log("I ate");
        npc.gameObject.GetComponent<UtilityNPCAgent>().eating = true;
        //npc.OnFinishedAction();
        npc.brain.finishedExecutingBestAction = true;
    }

    public override void SetRequiredDestination(NPCController npcController)
    {
        float distance = Mathf.Infinity;
        Transform nearestFood = null;


        List<Transform> foodList = new List<Transform>();
        for (int i = 0; i < GameManager.instance.foodList.Count; i++)
        {
            foodList.Add(GameManager.instance.foodList[i].transform); 
        }

        foreach(Transform t in foodList)
        {
            float distanceFromResource = Vector3.Distance(t.position, npcController.transform.position);
            if(distanceFromResource < distance)
            {
                nearestFood = t;
                distance = distanceFromResource;
            }
        }

        RequiredDestination = nearestFood;
    }
}

Both actions and considerations are scriptable objects. All consideration scriptable objects need to be put inside the action scriptable object. Inside the consideration scriptable object the response curve can be adjusted so that it is realistic for each consideration.

In the code below the agent decides what to do. An agent can do 3 things: decide an action, move to a destination and execute the action. These are handled in a simple state machine.

    void FSMTick()
    {
        if (currentState == State.decide)
        {
            brain.DecideBestAction();

            if (Vector3.Distance(brain.bestAction.RequiredDestination.position, transform.position) < 2f)
            {
                currentState = State.execute;
            }
            else
            {
                currentState = State.move;
            }
        }
        else if (currentState == State.move)
        {
            if (Vector3.Distance(brain.bestAction.RequiredDestination.position, transform.position) < 2f)
            {
                currentState = State.execute;
            }
            else
            {
                this.transform.position = Vector3.MoveTowards(this.transform.position, new Vector3(brain.bestAction.RequiredDestination.position.x, this.transform.position.y, brain.bestAction.RequiredDestination.position.z), 1f * Time.deltaTime);
            }
        }
        else if (currentState == State.execute)
        {
            if (!brain.finishedExecutingBestAction)
            {
                Debug.Log("Executing action");
                brain.bestAction.Execute(this);
            }
            else if (brain.finishedExecutingBestAction)
            {
                Debug.Log("Exit executing state");
                currentState = State.decide;
            }
        }
    }

In this code the Agent decides the best action:

    public void DecideBestAction()
    {
        finishedExecutingBestAction = false;

        float score = 0f;
        int nextBestActionIndex = 0;
        for (int i = 0; i < actions.Length; i++)
        {
            if (ScoreAction(actions[i]) > score)
            {
                nextBestActionIndex = i;
                score = actions[i].score;
            }
            Debug.Log(i + "" + actions[i].score);
        }

        bestAction = actions[nextBestActionIndex];
        bestAction.SetRequiredDestination(npc);
        finishedDeciding = true;
    }

Week 3

I finished the implementation of the new Utility AI system. I made actions for all the old states and gave them their scores and considerations. I had a bug with scoring actions where food was always 1 and drink always 0, so the agent kept eating. In the screenshot below, the first digit is the index of the action and the second is the score. As you can see, food (index 0) = score 1, drink (index 2) = score 0 and another action (index 1) = score 0.5699998.

I found that the problem was that the variables for food and drink were integers, so the score was always a rounded number. After I fixed this there was still a bug: the AI does the action with the highest score, but the scores are 'reversed'. As you can see in the screenshot below, hunger (index 0) returns 1 and the AI will look for food, but its 4th action is lower.

This was fixed by computing 1 - score. The score is now 'reversed' and the action which is most needed will be chosen. The score function now looks like this:

    public float ScoreAction(Action action)
    {
        float score = 1f;
        for (int i = 0; i < action.considerations.Length; i++)
        {
            float considerationScore = action.considerations[i].ScoreConsideration(agent);
            score *= considerationScore;

            if (score == 0)
            {
                action.score = 0;
                return action.score;
            }
        }

        //Save score in new variable
        float originalScore = score;

        //Determine the factor for scaling the score back up towards the 0..1 range
        //(use float literals, otherwise 1 / Length is integer division and always 0)
        float modFactor = 1f - (1f / action.considerations.Length);

        //original score * mod factor to make variable between 0 and 1
        float makeUpValue = (1 - originalScore) * modFactor;


        action.score = originalScore + (makeUpValue * originalScore);

        //Reverse value so that it can be used in the algorithm.
        action.score = 1 - action.score;

        return action.score;
    }

After I programmed this in Unity I did more research into artificial intelligence to add an extra layer to my simulation.

Machine learning algorithms are built to adapt to new circumstances and to detect and extrapolate patterns. Why would a machine need to be trained to solve problems?

1. Designers cannot anticipate all possible situations that the agent might find itself in.

2. Designers cannot anticipate all changes that could occur in a problem over time.

3. Sometimes humans have no idea how to program a solution themselves.

Machine learning algorithms learn from and adapt to the information they are given and try to extrapolate patterns from it.

Different approaches
1. Unsupervised learning: feed data into the system > apply the algorithm > make observations from the data. Useful when there is a lot of data to put into the system but no particular desired output; you are just interested to see what output it could determine and whether there is anything interesting in the dataset.

2. Supervised learning: feeding data into the system together with the expected outcome to reproduce, i.e. a set of inputs and outputs. The system is expected to learn the relationship between them.

For example, the relationship could be output = input × 4. The system should then take any input that wasn't in the dataset you trained it with and still figure out the answer (see the sketch after this list). Useful when there is a lot of data on a particular phenomenon and we want to reproduce that data in known and unknown situations.

3. Reinforcement learning: an environment where the agent can interact and a reward is determined for good and bad behavior.

4. Deep learning = creating neural networks, trained with techniques such as supervised and reinforcement learning.
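As a tiny, self-contained illustration of the supervised-learning idea above (not part of the simulation): fit a single weight w on data generated by output = input × 4, then use it on an input that was never in the training set.

float LearnMultiplier()
{
    float[] inputs  = { 1f, 2f, 3f, 5f };
    float[] outputs = { 4f, 8f, 12f, 20f };   // generated by output = input * 4

    float w = 0f;                 // the parameter the system has to learn
    float learningRate = 0.01f;

    for (int epoch = 0; epoch < 1000; epoch++)
    {
        for (int i = 0; i < inputs.Length; i++)
        {
            float error = inputs[i] * w - outputs[i];
            w -= learningRate * error * inputs[i];   // gradient descent step
        }
    }

    return w;   // ends up close to 4, so 7 * w ≈ 28 even though 7 was never in the data
}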

Why is game AI almost always rough around the edges (still with some bugs)? There are 2 types of AI:

  1. AI that learns to play a game: machine learning AI trained outside the game itself. It spends weeks learning to become the very best player and relies on data recorded from top-tier human players. Once trained it can make decisions very quickly because it already knows how to respond to situations. It isn't part of the game itself but an external program which passes input actions just like a human player would.
  2. AI designed for a game: created by a team of programmers and designers during the development process.

Building game AI can be really challenging. AI decisions need to be fast, in milliseconds; the action needs to make sense now. An AI agent cannot spend 3 seconds thinking about something: if it does, by the time it comes up with a solution the situation has changed drastically and the solution is largely useless. Systems like finite state machines and behaviour trees solve this problem by ensuring that the logic for possible actions is tighter and more explicit; this reduces the process of finding and selecting the correct action to a series of conditions.

This is useful for a game designer, given that you know why an AI is going to behave the way it does and can account for a variety of circumstances. But that in turn robs the AI of more intelligent or interesting decisions it could possibly make.

This is the biggest problem when creating a game with NPCs: the agents need to make intelligent decisions quickly. This needs to be done for each NPC, and calculating truly optimal decisions would hurt performance and often result in sluggish behavior in what is meant to be an interactive medium. These problems still exist today. Games keep getting more complex as the technology continues to make significant gains, and that makes it all the harder to maintain the illusion of intelligence across the entirety of a gameplay experience.

A solution to this is AI that cheats: it has access to knowledge it shouldn't have, so that it fits the game better and operates more effectively. But players don't want something that is too intelligent, because that is not fun to play against. This only applies to state machines and other older AI, though, so why can't we just use machine learning? The game needs to be ready before machine learning can learn how to play within its rules. This produces final machine learning agents which know exactly what to do and when, but if the game changes afterwards it could impact the quality of the game, and the agents need to be trained again. That doesn't really work for a game that is in development, given that you're often fixing bugs, making adjustments or even finalizing and adding content at various stages of the development timeline.

Week 4

With natural selection you can't know the precise replication and death chances; selection comes from the interaction between traits and the environment. In the example there is a plane, and each morning food appears on it. Blobs come out each morning and look for food until they run out of energy. If a blob fails to find food it dies, if a blob finds 1 food it survives, and if a blob finds 2 food it reproduces.

A new rule gets added: after reproducing there is a 10% chance of a speed increase or a speed decrease. Formula: energy cost = 1/time. Agents who are faster have a better chance to survive because they can reach food earlier than slower agents. The population will evolve to have a higher average speed over time, but as speed goes up the number of agents goes down: the agents compete more fiercely for food and are less efficient overall.

Two new traits get added. The first is size: in the video, size lets you eat other agents when you're at least 20% larger than them, but being big costs more energy; the energy cost now becomes size^3 * speed^2, so being big is high risk and high reward. The second trait is sense: agents have a sense radius in which they can sense food and other creatures. Once food or a smaller creature is in the sense radius, the agent walks towards it to eat it; once a bigger creature is in the sense radius, the agent runs away. The formula for energy cost now is size^3 * speed^2 + senseRadius. Using this formula the agents will converge on an optimal size, speed and sense radius depending on the environment.
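A minimal sketch of that cost function and the 20% eating rule (illustrative helper functions; the parameter names are my own):

// Energy an agent pays per step: size^3 * speed^2 + senseRadius.
float EnergyCostPerStep(float size, float speed, float senseRadius)
{
    return Mathf.Pow(size, 3f) * Mathf.Pow(speed, 2f) + senseRadius;
}

// An agent can eat another agent when it is at least 20% larger.
bool CanEat(float mySize, float otherSize)
{
    return mySize >= otherSize * 1.2f;
}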

Week 5

A hash function is an algorithm that maps any value to a fixed-size value. Hashing is related to cryptography, but unlike encryption it is one-way; the outputs of a hash algorithm are called hashes.

Deoxyribonucleic acid (DNA) is an organic chemical that contains genetic information. It is found in most cells of every organism. DNA is a key part of reproduction, in which genetic heredity occurs through the passing down of DNA from parent or parents to offspring. DNA is made of 2 chains of nucleotides, which are intertwined in the shape of a double helix. Each nucleotide contains one of 4 nitrogenous bases: adenine, guanine, thymine and cytosine. The bases face each other, where A always pairs with T and G always pairs with C.

My first idea for a DNA function was to have a DNA string of 8 letters and turn it into a hash using Unity's built-in hash function.

The input for this DNA is GTCGAATC (upper screenshot) and the output is the hash (lower screenshot). I started with this concept and made a basic string as starting DNA. This string is AAAAAAAA. When an agent reproduces, each nucleotide has a 3% chance to mutate. An agent reproduces when it eats 2 food or more, survives when it eats at least 1 food, and dies when it doesn't eat any food. After all agents have run out of energy to look for food, the simulation checks which agents die, survive and reproduce, and a new iteration starts.
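A minimal sketch of how that reproduction step could look (illustrative; it assumes a mutated nucleotide turns into a random base, and uses Unity's Hash128, which the sources point to, to hash the DNA string):

string MutateDNA(string parentDNA)
{
    char[] bases = { 'A', 'T', 'C', 'G' };
    char[] child = parentDNA.ToCharArray();

    for (int i = 0; i < child.Length; i++)
    {
        // Each nucleotide has a 3% chance to mutate into a random base.
        if (Random.Range(0f, 1f) < 0.03f)
        {
            child[i] = bases[Random.Range(0, bases.Length)];
        }
    }
    return new string(child);
}

// The DNA string can then be hashed and the hash used to derive the agent's stats.
Hash128 HashDNA(string dna)
{
    return Hash128.Compute(dna);
}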

Results

The simulation starts with 10 agents with the starting DNA AAAAAAAA

After 10 iterations you can easily see some agents getting smaller or bigger. If you watch the simulation you also see that some agents run out of energy earlier or later, and that some agents are faster.

After 100 iterations you see even more variation. The average population over 100 iterations is 25.41 agents.

The environment was 25×25 with enough food, so the NPCs grow larger. Now imagine the NPCs are in a desert, a large environment where food is scarce. I changed the environment to 50×50 and looked at the results. On the first try all agents died at iteration 7.

After some more tries we can see that if the starting agents manage to survive, we end up with more near-identical NPC agents. Those agents have adapted themselves to living in this desert.

After this I changed the DNA sequence to be 26 characters long. The starting DNA sequence is 26×A. The first 8 characters are for stats and the last 18 are for color, so there are 6 characters for R, 6 for G and 6 for B. The first 3 characters of a color raise the value and the last 3 lower it.

    private float DetermineAmountRGB(int startIndex, int genAmount)
    {
        float totalOfGen = 0;

        //For first 3 places get total gen score and add to total
        for (int i = startIndex; i < startIndex + genAmount / 2; i++)
        {
            totalOfGen += modValues[i];
        }

        //For last 3 places get total gen score and subtract from total
        for (int i = startIndex + genAmount / 2; i < startIndex + genAmount; i++)
        {
            totalOfGen -= modValues[i];
        }

        //Multiply total gen score by the score per gen (255f forces float division)
        totalOfGen *= 255f / (genAmount * 3);

        //Totalgen will be a value between ~-127 and ~127 so if you add 128 you get the right value between 0 and 255 for RGB color 
        totalOfGen = 128 + totalOfGen;
        return totalOfGen;
    }

For now this code will average out the color, but later, when agents react to the environment, specific genes will survive longer because their agents are better camouflaged. Other genes won't survive because their agents don't blend in with the environment.

Agents eating each other: I made code for agents which checks whether there are other agents within their sensors. If the closest agent is at least 20% bigger than the current agent, the current agent will run away from it. If the current agent is at least 20% bigger than the closest agent and there is no bigger agent within its sensors, it will run towards the smaller agent and try to eat it. If an agent eats the smaller agent, it counts as 1 food.
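A rough sketch of that rule (illustrative, not the exact project code): given the closest sensed agent, decide whether to flee, chase or ignore it.

Vector3 ReactToClosestAgent(Vector3 myPosition, Vector3 otherPosition, float mySize, float otherSize)
{
    Vector3 away = (myPosition - otherPosition).normalized;

    if (otherSize >= mySize * 1.2f)
    {
        return away;          // the other agent is at least 20% bigger: run away
    }
    if (mySize >= otherSize * 1.2f)
    {
        return -away;         // we are at least 20% bigger: move towards it to eat it (counts as 1 food)
    }
    return Vector3.zero;      // neither: ignore it and keep looking for food
}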

An agent can be sensed by bigger agents as food. The chance of an agent being sensed by a predator is determined by the terrain and the agent: the colors of the terrain and the agent are weighed against each other and a score is calculated. The higher this score, the bigger the color difference and the higher the chance of being sensed by a bigger agent. This is the code for changing the color based on the environment:

private string ColorMutation(float thisColor, float otherColor, int color)
{
    int indexToChange;
    string stringToReturn = "";
    float difference = (thisColor - otherColor * 255);

    //Check which index to change from the DNA
    if (difference > 0)
    {
        double rNumber = Random.Range(0f, 1f);
        if (rNumber < 0.3333)
        {
            indexToChange = 3;
        }
        else if (rNumber < 0.6666)
        {
            indexToChange = 4;
        }
        else
        {
            indexToChange = 5;
        }
    }
    else
    {
        double rNumber = Random.Range(0f, 1f);
        if (rNumber < 0.3333)
        {
            indexToChange = 0;
        }
        else if (rNumber < 0.6666)
        {
            indexToChange = 1;
        }
        else
        {
            indexToChange = 2;
        }
    }

    //Loop through the DNA list and if the index is the index of the color which should be changed change it. Else keep the string as it is. Add it to the string to return
    for (int j = colorStartIndex + color; j < colorStartIndex + color + genAmount; j++)
    {
        if (j - (colorStartIndex + color) == indexToChange)
        {
            int mod = CharacterToModValue(stringToHash[j].ToString());
            mod = mod + 1;
            stringToReturn += ModToCharacterValue(mod);
        }
        else
        {
            stringToReturn += stringToHash[j].ToString();
        }

    }
    return stringToReturn;
}
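The detection score itself is not shown above; as a rough sketch (an assumption about how the colour comparison could be done, not the project's exact formula), the colour difference between agent and terrain can be turned into a 0-1 chance of being sensed:

float DetectionChance(Color agentColor, Color terrainColor)
{
    // Sum of per-channel differences: 0 for perfect camouflage, up to 3 for maximum contrast.
    float difference =
        Mathf.Abs(agentColor.r - terrainColor.r) +
        Mathf.Abs(agentColor.g - terrainColor.g) +
        Mathf.Abs(agentColor.b - terrainColor.b);

    return Mathf.Clamp01(difference / 3f);
}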

Iteration 0 with a black floor, 30 food, 10 agents and the floor is 25×25:

Iteration 15:

Iteration 40:

Around iteration 100, all agents have the same color and are adapted to the terrain:

Iteration 0 with a green floor, 30 food, 10 agents and the floor is 25×25:

Iteration 15:

Iteration 40:

Iteration 100:

Conclusion: agents take longer to adapt to colors which differ more from their starting color.

Generations

For this sprint I implemented generations. After 10 iterations, a new generation will start. When the generation starts, the user is shown 4 choices.

Button 1: Continue button: The game will continue, just like it would normally.

Button 2: Start over button: The game will continue from the start of this generation. This gives the user a look at how the same starting board gives different outcomes every time.

Button 3: Open agent editor button: On pressing this button the user has two choices: go back to the screen with 4 buttons or open the agent editor, which I will explain later in this article.

When this button is pressed the player can also use the tooltip to select any agent. The tooltip now has 2 buttons underneath it. By pressing the choose button in the tooltip, a new generation starts using 10 copies of the selected agent as starting agents.

The user can also press the save button in the tooltip, which saves the agent. Saved agents can be selected from the saved agents menu. This menu lets the user cycle through all saved agents, shows a preview of the selected saved agent, and has a choose button which starts a new generation using 10 copies of the selected agent as starting agents. The DNA of saved agents is also shown in the menu; this makes the connection between agents and DNA visible and lets users see what effect DNA has on agents.

Button 4: Iteration editor: When this button is pressed, the user is greeted by 3 buttons.

The back button sends the user back to the screen with 4 buttons. The save this generation button is used to save the current state of the board. When the open button is pressed, the user sees all the saved generations.

When the user saves more generations, arrows to cycle through the generations pop up. When the user presses the choose button, they can view the whole generation again. In this screen the user is again greeted with 3 choices.

The back button brings the user back to the generation selection screen. The continue button continues the simulation from this state. The compare iterations button opens the generation selection screen again; the user can now choose another generation and compare it with the current one.

This is one of the results. The comparison together with the other buttons can lead to interesting comparisons between different generations.

Sources

Game Maker's Toolkit. (2023, 30 June). The genius AI behind the Sims [Video]. YouTube. Consulted 7 March 2024, from https://www.youtube.com/watch?v=9gf2MT-IOsg

IGN. (2014, 21 February). Banished review [Video]. YouTube. Consulted 7 March 2024, from https://www.youtube.com/watch?v=dxtNKoZEwco

Raptor. (2019, 6 July). Banished | EP. 01 | Greatest City Construction Begins | Banished City Building Tycoon Gameplay [Video]. YouTube. Consulted 7 March 2024, from https://www.youtube.com/watch?v=oWHhl73oHj0

Age of PC. (2017, 1 February). Banished vs Life is Feudal Forest Village : Differences and is it a ripoff? [Video]. YouTube. Consulted 7 March 2024, from https://www.youtube.com/watch?v=VEb6ODnhSUA

Mage's Almanac. (2021, 21 March). How to Play Life is Feudal: Forest Village [Video]. YouTube. Consulted 7 March 2024, from https://www.youtube.com/watch?v=cqHXjQY9S8Y

Worth A Buy. (2022, 13 April). Foundation Review – Nice looking City builder [Video]. YouTube. Consulted 7 March 2024, from https://www.youtube.com/watch?v=jOwWAcT2u0I

JeremyThunder. (2023, 8 December). Starting a New City — Foundation: The Riverlands — 01 [Video]. YouTube. Consulted 8 March 2024, from https://www.youtube.com/watch?v=dZZEBNgSAKU

SergiuHellDragoonHQ. (2021, 13 January). RimWorld – Gameplay (PC/UHD) [Video]. YouTube. Consulted 8 March 2024, from https://www.youtube.com/watch?v=MzKP3TV0Qzw

ChrisCasts. (2023, 9 April). Dwarf Fortress VS Rimworld: A Tale of Two Simulators [Video]. YouTube. Consulted 8 March 2024, from https://www.youtube.com/watch?v=bDSsPNLpMes

Noclip – Video Game Documentaries. (2020, 9 June). Dwarf Fortress Creator Explains its Complexity & Origins | Noclip Interview [Video]. YouTube. Consulted 8 March 2024, from https://www.youtube.com/watch?v=VAhHkJQ3KgY

Blind. (2023, 5 July). Dwarf Fortress – Getting Started (Beginners Guide / Tutorial) [Video]. YouTube. Consulted 8 March 2024, from https://www.youtube.com/watch?v=93pTwS6mIoc

The Shaggy Dev. (2023, 20 April). Utility AI [Video playlist]. YouTube. Consulted 12 and 13 March 2024, from https://www.youtube.com/watch?v=ejKrvhusU1I&list=PLDpv2FF85TOp2KpIGcrxXY1POzfYLGWIb&index=1

AI and Games. (2023, 18 April). How Machine Learning is Transforming the Video Games Industry | AI 101 [Video]. YouTube. Consulted 21 March 2024, from https://www.youtube.com/watch?v=dm_yY-hddvE

AI and Games. (2024, 24 January). Why is It Difficult to Make Good AI for Games? [Video]. YouTube. Consulted 21 March 2024, from https://www.youtube.com/watch?v=qCkqpRnk1oU

Primer. (2018, 15 November). Simulating natural selection [Video]. YouTube. Consulted 28 March 2024, from https://www.youtube.com/watch?v=0ZGbIKd0XrM

Unity Technologies. (n.d.). Unity – Scripting API: Hash128. Consulted 2 April 2024, from https://docs.unity3d.com/ScriptReference/Hash128.html

The Editors of Encyclopaedia Britannica. (2024, 7 April). DNA | Definition, Discovery, Function, Bases, Facts, & Structure. Encyclopedia Britannica. Consulted 2 April 2024, from https://www.britannica.com/science/DNA

      

2 thoughts on "Evolution simulation where evolutions are saved in DNA."

  1. Follow-up steps and considerations for 27 March '24:

    1. What exactly is the end goal you are trying to reach, and how do you get there?
    2. What other research/games/simulations also do this?
    3. Use a genetic algorithm to store what a specific agent or group does in order to survive, and build on that.
    4. Fuzzy logic is also a possible way to translate simple wants into actions.
    5. Keep it simple 🙂
