Shard is an RTS AI for the Spring engine. Written in a mix of C++ and Lua, it is the official AI of the game Evolution RTS and the successor to the NTai project. Here is a video of 4 Shard AIs defeating 4 humans, among them the game's creator and several experienced players:
Shard's strategy is to expand quickly and attack as often as possible in large numbers.
Shard supports multiple games and has the potential to support multiple game engines. It does this by abstracting away engine differences and providing a common API to an isolated Lua scripting environment. This also allows game developers to build and extend their own AIs without learning C++.
Having run a previous open-source AI project, I analysed how the developer community used that AI and its competitors, listing their strengths and weaknesses in performance and functionality. I then devised a basic architecture of individual modules handling attack and construction for individual unit types (e.g. tanks and planes), created UML diagrams, and discussed the design with experienced professors and colleagues.
Having selected a target game and approached its content developer to cooperate on optimising performance, I set about constructing the AI. I worked iteratively, releasing internal builds on a regular basis to improve performance and add features. Once it was feature complete, I made a public release and continued iterating. For example, new behaviours were added, such as builders constructing defence turrets when under attack.
Engine Abstraction and Multiple Games
A major issue at the time was that the game engine had a scripting environment for implementing arbitrary game rules, rules the existing AIs could not see. This inflexibility left many games unsupported. I identified the issue at an early stage and switched from a static C++ AI to a hybrid AI by exposing a Lua scripting environment for game developers to extend.
QA & Testing
Due to the nature of the problem, unit testing could never cover everything the AI needed to do. While it was possible to test the issuing of orders, it was difficult to test the higher-level logic for every case. To ensure the AI performed correctly and to a high standard, I created an automated testing script. The testing process repeatedly started a game between my AI and one of several rival AIs known to work. The two would play the game to its conclusion, and the winner was logged to a spreadsheet.
When errors in my AI were logged, fixes were applied and testing was restarted. Performance was calculated as the ratio of wins to losses against the rival AIs, with a 100% win rate being the desired outcome. Each test involved 200 games against several rival AIs across several distinct game environments.