I'm finding this very interesting… An evolved neural network that plays the game of tic-tac-toe and so far is a pretty decent player. Here is a visualization of its evolved "brain" that underwent GA (genetic algorithm) training with classification learning + self-play.
In reply to an earlier post:
Over the past few weeks I've been experimenting with deep learning and researching neural networks and how to evolve them. The thing is, I haven't gotten very far. I've built two different approaches so far, with limited results. The frustrating part is that these things are so "random" it isn't even funny. I can't even get a basic ANN + GA to reliably evolve a network that solves the XOR pattern with high accuracy.
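To make the ANN + GA idea concrete, here is a minimal sketch of the general setup (not the actual code from my repo): a fixed 2-2-1 feed-forward network whose nine weights are evolved by a mutation-only, keep-the-best GA until it reproduces the XOR truth table. The population size, mutation strength, and fitness threshold are illustrative assumptions, and like any GA it is stochastic, so it won't hit the target on every run.

    // Minimal sketch (assumptions noted above): evolve the 9 weights of a
    // fixed 2-2-1 feed-forward net with a mutation-only, keep-the-best GA.
    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    // The four XOR cases: input a, input b, expected output.
    var xorCases = [][3]float64{
        {0, 0, 0}, {0, 1, 1}, {1, 0, 1}, {1, 1, 0},
    }

    func sigmoid(x float64) float64 { return 1 / (1 + math.Exp(-x)) }

    // forward runs a 2-input, 2-hidden, 1-output network:
    // w[0..5] are hidden weights + biases, w[6..8] are output weights + bias.
    func forward(w []float64, a, b float64) float64 {
        h1 := sigmoid(a*w[0] + b*w[1] + w[2])
        h2 := sigmoid(a*w[3] + b*w[4] + w[5])
        return sigmoid(h1*w[6] + h2*w[7] + w[8])
    }

    // fitness is 1 minus the mean squared error over the four XOR cases.
    func fitness(w []float64) float64 {
        var mse float64
        for _, c := range xorCases {
            d := forward(w, c[0], c[1]) - c[2]
            mse += d * d
        }
        return 1 - mse/4
    }

    func main() {
        const popSize, genomeLen = 50, 9 // assumed values, purely illustrative
        pop := make([][]float64, popSize)
        for i := range pop {
            pop[i] = make([]float64, genomeLen)
            for j := range pop[i] {
                pop[i][j] = rand.NormFloat64() * 2
            }
        }
        for gen := 0; gen < 2000; gen++ {
            // Find the fittest genome in the population.
            best := 0
            for i := range pop {
                if fitness(pop[i]) > fitness(pop[best]) {
                    best = i
                }
            }
            f := fitness(pop[best])
            if f > 0.99 {
                fmt.Printf("Generation %d | Fitness: %f\n", gen, f)
                return
            }
            // Replace everyone else with a mutated copy of the best genome.
            for i := range pop {
                if i == best {
                    continue
                }
                for j := range pop[i] {
                    pop[i][j] = pop[best][j] + rand.NormFloat64()*0.5
                }
            }
        }
        fmt.Println("Target not reached")
    }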
This is one of my attempts:
$ go build ./cmd/xor/... && ./xor
Generation 95 | Fitness: 0.999964 | Nodes: 9 | Conns: 19
Target reached!
Best network performance:
[0 0] → got=0 exp=0 (raw=0.000) ✓
[0 1] → got=1 exp=1 (raw=0.990) ✓
[1 0] → got=1 exp=1 (raw=0.716) ✓
[1 1] → got=0 exp=0 (raw=0.045) ✓
Overall accuracy: 100.0%
Wrote best.dot → render with `dot -Tpng best.dot -o best.png`
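In case you're wondering how the check marks line up: as far as I can tell the raw sigmoid output is just thresholded to produce the got value, and accuracy is the fraction of cases where got matches exp. A tiny sketch of that, where the 0.5 cutoff is my assumption rather than something taken from the repo:

    // Assumed mapping from raw outputs to predictions: threshold at 0.5.
    package main

    import "fmt"

    func main() {
        raws := []float64{0.000, 0.990, 0.716, 0.045} // raw outputs from the log above
        expected := []int{0, 1, 1, 0}                 // XOR targets
        correct := 0
        for i, raw := range raws {
            got := 0
            if raw >= 0.5 { // assumed decision threshold
                got = 1
            }
            if got == expected[i] {
                correct++
            }
        }
        fmt.Printf("Overall accuracy: %.1f%%\n", 100*float64(correct)/float64(len(raws)))
    }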