My chess computers

A few steps backward...

1978, a high school friend of mine owned a Chess Challenger 10, the first chess computer I ever encountered. A marvelous device coming from the US, very impressive but so costly it was out of reach... At the time I also heard about its competitor Boris, but never got near one.

1979, last high school year before the general certificate of education: another mate, from a wealthy family, owns a brand new Apple II, a rather elitist computer at the time. He believes it is unbeatable at chess with its Sargon II program. I offer to play a game against it at the highest level... Several hours of thinking per move, you say? No problem, let's play one move per day; we meet at high school each and every day... and I beat the "monster". Once graduated, I get my first device: the Fidelity Chess Challenger 7, offering a playing strength similar to the CC10's at a much lower cost. I played so many games with this computer... I also played some correspondence chess tournaments at the time and, despite its weak level, the CC7 helped me sort out some tactics to check my strategy... A funny thought, considering the analysis power delivered by today's engines.

1980, I bought a second-hand TRS-80 Model 1, BASIC Level 2, featuring a Z80 processor with a 1.77 MHz clock (!). I upgraded it from 16K RAM to 48K thanks to an expansion interface, which also let me get rid of the tape recorder (equipped with a counter, an important feature at the time!), replaced by two 5.25" floppy drives. The resulting storage? 100 KB on the second drive, less on the first one as some space was taken by the operating system (leaving roughly 80 KB of usable space on that drive, if I remember well). I successfully fitted an overclocking kit, resulting in a 50% speed-up, thus a 2.66 MHz Z80. The kit is smartly designed: when a drive access signal is triggered, the clock drops back to normal speed; without this feature the drive reads/writes would fail. A small manual switch makes it possible to turn the speed-up off, useful for instance when playing video games. Why do I describe all that stuff? Well, this will trigger the end of my CC7 and, for some time, of my interest in dedicated chess computers. I soon got various chess programs designed for the TRS-80, of course including the famous Sargon II, but Sfinks and MyChess as well; they outclass the CC7 despite its speed advantage (a 4 MHz Z80, although I later learned its clock is actually more modestly set at 3.6 MHz). Proud of my successful TRS-80 overclocking, I tried a quartz substitution on the CC7. It never recovered from that one...

1985, my studies are over, I have sold my TRS-80 to fund my first motorbike, completed my military service, and been in my first job for a few months. I miss chess computing, so I buy a Fidelity Excellence. Once again an excellent quality/price balance, but this Spracklen program (from the Sargon III family) has improved a lot and runs on a 3 MHz 6502 microprocessor, that is to say three times faster than the Apple II... a tough nut to crack. Too strong for me. I will play a lot less with the Excellence than with the CC7; losing is not as fun as winning...

2010, I still own my Excellence, and chess engines now play at an astronomical level; nostalgia drives me to buy a second-hand CC7, the one from my youth, about 30 years later. What fun! I immediately recognize its style; it is just like meeting an old mate again. I need to find opponents for it; the costly chess computers of the eighties and nineties are now offered second-hand for a few tens of euros... And this is how one ends up owning a small collection of dedicated chess computers.

My 'beginner level, class E and lower' chess computers
My 'occasional player level, class D' chess computers
My 'weak club player, class C level' chess computers
My 'average club player, class B level' chess computers
My 'strong club player, class A level' chess computers
My 'expert player, candidate master and higher level' chess computers

Details about data provided:

Elo level: a relative scale of the computers' strength, computed through tournaments between the programs played at a pace of 15 seconds per move, thus 40 moves in 10 minutes. The Elo scale I maintain (values evolve continuously as tournaments are played) is fixed to an 'anchor' I chose, the Fidelity Electronics 3 MHz Excellence, for several reasons:
- it is a device I have owned for more than thirty years,
- at the time it appeared, mixed tournaments between human players and computers existed, so Elo levels were quite well known (I am not talking about the manufacturers' marketing claims!).
As a personal convention, I assigned the Excellence a 1780 Elo rating, that is to say a good class B level. Whenever the Excellence wins or loses points at the end of a tournament, I shift the whole Elo table by the required number of positive or negative points to bring the anchor back to 1780.
The classes listed above (they are commonly used) are spaced at 200-point intervals and, for guidance, a player with a 200-point Elo advantage over his opponent has the odds in his favor to score roughly 3 points out of 4. With a 400-point advantage, thus two classes, the stronger player is expected to score 10 to 1 (for example, 9 wins and two draws, as a draw awards 0.5 point to each player).
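To make these conventions concrete, here is a minimal Python sketch (the helper names and sample values are mine, not part of my actual tooling): re-anchoring simply shifts every rating by the same offset, and the '3 out of 4' and '10 to 1' figures follow from the standard Elo expected-score formula.

def reanchor(ratings, anchor="Excellence 3MHz", anchor_elo=1780.0):
    # Shift the whole table so the anchor device comes back to 1780 Elo.
    shift = anchor_elo - ratings[anchor]
    return {name: elo + shift for name, elo in ratings.items()}

def expected_score(elo_a, elo_b):
    # Standard Elo expected score of player A against player B.
    return 1.0 / (1.0 + 10.0 ** ((elo_b - elo_a) / 400.0))

# A 200-point edge gives about 0.76 (roughly 3 points out of 4);
# a 400-point edge gives about 0.91 (roughly 10 points to 1).
print(round(expected_score(1980, 1780), 2))  # 0.76
print(round(expected_score(2180, 1780), 2))  # 0.91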

FIDE equivalence: the Elo system was first designed to give an insight into the level of (human!) chess players. Or, rather, to represent the relative levels they hold inside an organization. Calculation systems may differ slightly across organizations, and the 'inbred' matches between players of a single organization tend, over time, to turn its Elo ranking into a distinguishing feature of that organization. So, as an example, a USCF Elo (United States Chess Federation) cannot be directly compared to a FIDE Elo (Fédération Internationale Des Echecs, International Chess Federation). The same of course applies to the Elo ranking resulting from computer matches. Not only are we observing a microcosm (similar to the organization concept), but the strengths and weaknesses at stake in computer matches also differ from the ones involved in human games. A weak human player can be outcalculated by even a weak program's tactical ability, while a strong player can overwhelm a strong program by leveraging his better strategic insight into the positions. Just as arithmetic formulas can translate Elo values between USCF and FIDE, such a formula can also be designed to translate a chess computer's Elo into an indicative level that is meaningful to a rated chess player.

CMhz: stands for 'Chess MHz'. This concept, introduced by Eric Hallsworth in issue #34 of his 'Selective Search' magazine (June/July 1991), is useful for comparing the power of our chess computers' processors. Its reference value is 1 for a 6502 running at 1 MHz. Thus an Apple II or a Commodore 64 is worth 1 CMhz, and the 3 MHz Excellence is worth 3 on this processor power scale.

Rperf: stands for 'relative performance'; it uses the above hardware power scale to compute a pure software performance level. I use my Excellence 'anchor' here again, to which I conventionally assign a 100% Rperf. So 100% is the chess playing skill of the Spracklen program hosted by the Excellence. A 4 MHz Excellence (which I do not own) would show the same 100% Rperf. I compute the Rperf of a chess program as the ratio, displayed as a percentage, of the Elo measured for the program running on its own hardware to the Elo the Excellence would be expected to achieve on hardware of the same power. I thus build a scale which is independent of hardware, where a program less developed than the Excellence shows an Rperf below 100% (for instance Sargon II, 82%) and a more advanced one shows an Rperf above 100% (for instance the Elite Avant-Garde, 108%). The Par Excellence, successor of the Excellence and known to be considerably stronger, owes its strength above all to its 5 MHz clock, as its 102% Rperf reveals a relatively modest, though real, software improvement. The formula is, for a device of 'CMhz' power achieving an 'Elo' level:
Rperf = Elo / ( 1780 + Log10(CMhz/3) / Log10(2) * 60 )
Explanation: 1780 is the established Elo level of the 'anchor' 3 MHz Excellence; CMhz/3 is the CPU power ratio of the considered device to the Excellence (worth 3 CMhz); the ratio of the base-10 log of that value to the base-10 log of 2 gives the number of times computing power is doubled from the Excellence to the considered device; and 60 is the (again conventionally chosen) Elo gain for each doubling of computing power.
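A minimal Python sketch of this computation (function and parameter names are mine):

import math

# Rperf as defined above: the measured Elo divided by the Elo the 1780-rated,
# 3 CMhz Excellence would be expected to reach on hardware of the same CMhz
# power, assuming a 60 Elo gain per doubling of computing power.
def rperf(elo, cmhz, anchor_elo=1780.0, anchor_cmhz=3.0, elo_per_doubling=60.0):
    doublings = math.log10(cmhz / anchor_cmhz) / math.log10(2)
    excellence_elo_on_same_hardware = anchor_elo + doublings * elo_per_doubling
    return 100.0 * elo / excellence_elo_on_same_hardware  # as a percentage

# Sanity check: the 3 CMhz Excellence itself scores exactly 100%.
print(round(rperf(1780, 3.0)))  # 100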

KT: stands for "Khmelnitsky test", following Scandien's idea; this deeper analysis will only be performed on some devices. Igor Khmelnitsky wrote the book "Chess Exam and Training Guide: Rate Yourself and Learn How to Improve!"; it provides 100 diagrams to be studied, with two questions per diagram. The most interesting aspect of this test is that it highlights the player's strengths and weaknesses: points are granted according to the answers, then selectively distributed over the skill domains involved in each diagram: endgame, middlegame, opening, calculations, standard positions (endgame), strategy, tactics, recognizing threats, attack, counterattack, defense, and sacrifices. Each question feeds the scores of three, four or five of the twelve possible domains. Once the 200 questions are answered, distribution tables let you convert the points percentage per domain into an estimated Elo level, and provide a global estimate as well. This estimate is considered reliable for human players; having taken the test myself, I can testify that my score ranked me accurately within the chess computer strength category that fits me best (weak club player, class C level). On the other hand, I am not totally convinced of the relevance of the absolute estimate for chess computers: the test was not designed for them, nor have the results been standardized for such devices. So why run such a lengthy and rather complex test? Because the relative values are very valuable. Beyond comparing two chess computers, the skill-domain profile of an analyzed computer is very revealing; it will be materialized as a "net" (radar) graph. The process I use for the test:
- only chess computers able to display their evaluation can be used, as a frequent request is to assess the position: is it more or less balanced, is White or Black better, or is one side winning? I chose a three-minute thinking time limit: the test recommends not exceeding twenty minutes of analysis per diagram for a human player, but in my own scale of values a computer must be fast, and three minutes is the usual average time per move in a tournament game.
- a common second question asks for the best move among four suggested ones. At the end of the three minutes of computing time, in addition to reading the score, I display or let the computer play its best move so far. If it is one of the four offered, question 2 is answered. If not, I play each of the four candidate moves and record the computer's evaluation, but unlike the first run I allow only one minute of thinking time for each (thus a total of four minutes of computing time to answer the second question).

- a few more complex or strategic questions require several moves from the computer in order to reveal its playing approach; in such cases I let it play against itself at fifteen seconds per move (the usual time setting I use for games).
- finally, I wish to make it clear that I perform the full test: 100 diagrams and 200 questions.
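As an illustration of the bookkeeping only, here is a small Python sketch: the domain names come from the list above, but the point values are invented placeholders and the book's conversion tables (points percentage per domain to Elo) are deliberately not reproduced.

from collections import defaultdict

# Points earned and maximum obtainable points, per skill domain.
scores = defaultdict(float)
maximums = defaultdict(float)

def record_answer(points_earned, points_max):
    # Credit one answered question to the three to five domains it involves.
    for domain, pts in points_earned.items():
        scores[domain] += pts
    for domain, pts in points_max.items():
        maximums[domain] += pts

# Example question feeding three domains (point values are made up).
record_answer({"tactics": 8, "attack": 6, "calculations": 4},
              {"tactics": 10, "attack": 10, "calculations": 5})

# Points percentage per domain, ready to be looked up in the book's tables.
for domain in scores:
    print(domain, round(100.0 * scores[domain] / maximums[domain]))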

Statistics and data

And what about non-electronic chess?