Robin Milner: Geek of the Week
From ‘one of the smartest men I know’ to ‘an undoubted genius’, there are so many plaudits pinging around the internet about Robin Milner, that sometimes it can feel like an amphetamine-fuelled game of Pong.
Over a period of thirty years his contributions to computer science have ranged from the semantic foundations of programming languages, in the tradition of Dana Scott and the no less remarkable Christopher Strachey, to programming language design and the theory of concurrency.
Perhaps, though, he is best known for the development of one of the first industry-scale functional languages, Standard ML, a widely used programming language and the parent of a number of major dialects including OCaml and F# – the language developed by Don Syme and his team at Microsoft Research in Cambridge for the .NET platform.
Robin Milner’s love of maths stretches back almost six decades, to when, as a pupil at Eton College, he was lucky enough to have an inspirational teacher who taught him pure mathematics and projective geometry.
‘He found maths so beautiful that we found it beautiful, too. That was a good experience.’
After Eton he took up a scholarship at King’s College, Cambridge, then went off to do his national service in the Royal Engineers.
After the army he was employed at Ferranti, the once-bright beacon of the British computer industry, and after a few years decided to get an academic job, which he did at City University in 1963.
‘I had a kind of double life: I taught maths to engineers, which was very routine, but I also began to get interested in AI.
‘Oh yes, these were things one could do to get into the academic side of the subject. I wrote an automatic theorem prover in Swansea for myself and became shattered with the difficulty of doing anything interesting in that direction and I still am.’
He left academic life in the UK for a brief spell in the US, where he worked alongside John McCarthy at the Artificial Intelligence Laboratory at Stanford, then returned to the University of Edinburgh, where for over twenty years he was a leading figure in its well-known Computer Science Department and racked up a number of firsts.
These included devising CCS (Calculus of Communicating Systems), one of the first algebraic calculi for analysing concurrent systems; developing the pi-calculus, a basic model of mobile, communicating processes; and playing a leading part in developing LCF (Logic for Computable Functions), a system for machine-assisted formal reasoning.
At Edinburgh he founded a research institute called the Laboratory for Foundations of Computer Science. In 1988 he was elected a Fellow of the Royal Society and three years later was awarded the Turing Award.
After leaving Edinburgh University in 1995, Robin was appointed Professor of Computer Science at Cambridge University and was Head of the Computer Laboratory from January 1996 to October 1999. He is now Professor Emeritus of Computer Science at Cambridge and, since March 2009, has also been a part-time Professor at the Informatics Forum at the University of Edinburgh.
RM:
When did you realise that ML was ready to be used as a general language?
MILNER:
In Edinburgh in the mid 1970s, and two things helped. First, ML had learned a lot from POP2, the language designed by Rod Burstall and Robin Popplestone in the AI department at the university, and POP2 was under experimental use for AI purposes.
Second, at some point Luca Cardelli implemented ML on the machine we used in the CS department for teaching second-year undergraduates — and someone (I forget who) took advantage of this and taught it.
RM:
If you had the chance to re-design ML, would you do anything differently? And why was it called ‘Standard ML’?
MILNER:
A lot was done when we went from ML to Standard ML, which was formally defined and published in 1990. This was largely enrichment. By the way, we called it ‘Standard’ ML to prevent some organisation stepping in to standardise it.
So many good people spent so much time on designing Standard ML that it became remarkably stable. But one or two things, especially the brilliant idea of value polymorphism by Andrew Wright, were incorporated in the 1997 revision.
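For readers who haven’t met the term, the idea behind Wright’s value polymorphism (often called the value restriction) can be sketched in a few lines of Standard ML; the fragment below is an illustration of the idea, not an example from the interview:

    (* Only bindings whose right-hand side is a syntactic value are
       given a polymorphic type. *)

    (* List.rev is itself a value (a variable), so this binding is
       generalised to 'a list -> 'a list. *)
    val reverse = List.rev

    (* ref [] is an application, not a syntactic value, so its type is
       not generalised; an explicit annotation keeps it well typed.  If
       it were generalised, a string could be stored in the cell and
       later read back at another type, which would be unsound. *)
    val cell : int list ref = ref []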
RM:
When you developed ML, CCS and the pi-calculus, were you searching for something that you didn’t yet know you needed?
MILNER:
For ML, the initial goal was clear: to have a programming language to help you prove theorems in such a way that you would formally know that the proofs were sound.
We didn’t know what ‘formally’ would mean. It turned out to mean that type-checking was rigorous. Many of the ideas needed for this, such as Curry’s and Hindley’s type system and the programming notion of abstract type, were already around. You could say that we didn’t know at the start that those were the things we needed.
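The combination Milner describes – rigorous type-checking together with abstract types – is what makes the LCF approach sound, and it can be sketched in a few lines of present-day Standard ML. The kernel below is a toy illustration with invented names, not code from LCF itself: because thm is an abstract type, the only way for client code to obtain a theorem is through the exported inference rules.

    (* A toy LCF-style kernel: 'thm' is opaque, so theorems can only be
       constructed by the rules this structure exports. *)
    signature KERNEL = sig
      type thm
      val axiom : string -> thm       (* admit a formula as an axiom *)
      val conj  : thm * thm -> thm    (* from A and B, infer A /\ B *)
      val show  : thm -> string       (* read back the proved formula *)
    end

    structure Kernel :> KERNEL = struct
      type thm = string
      fun axiom a = a
      fun conj (a, b) = "(" ^ a ^ " /\\ " ^ b ^ ")"
      fun show t = t
    end

    (* Types in the Curry-Hindley style are inferred with no annotations:
       compose gets the principal type ('b -> 'c) -> ('a -> 'b) -> 'a -> 'c. *)
    fun compose f g x = f (g x)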
Process calculi (CCS, the pi-calculus and so on) were a bigger step in the dark for me. This was a search for something really new: a mathematical treatment of communicating and concurrent entities that could rival the known theories of sequential programming. Although Petri nets had existed since the 1960s, the elusive question was how to build complex behaviours from simpler ones, in a modular way.
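To give a flavour of what ‘building complex behaviours from simpler ones, in a modular way’ means, here is a textbook-style pi-calculus step in conventional notation (my illustration, not something from the interview): two separately defined processes, put side by side with the parallel-composition operator, exchange a channel name, and the receiver can then communicate on the name it has just learned.

    % One communication step in the pi-calculus.  The sender passes the
    % name z along channel x; the receiver binds it as y and then sends
    % w on it.  Parallel composition (\mid) is the modular glue.
    \[
      \overline{x}\langle z \rangle.\mathbf{0} \;\mid\; x(y).\overline{y}\langle w \rangle.\mathbf{0}
      \;\longrightarrow\;
      \mathbf{0} \;\mid\; \overline{z}\langle w \rangle.\mathbf{0}
    \]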
This kind of theory is still developing. For me, it was and still remains a wonderful challenge. It amounts to a search for foundations for the informatic systems that live not only inside computers but in the real world (whether naturally or embedded by us). We must aim to model the informatic aspect of the real world just as seriously as other scientists model other aspects.
RM:
How important has self-confidence been in your work?
MILNER:
Very important. It’s not confidence in having exceptional knowledge or technique; it’s trust that one can pose problems that don’t follow the fashion. After that, it’s a matter of persistence! The confidence lends you the urge to keep on at such a problem, even when others don’t attach value to solving it.
RM:
You’ve worked with Simon Peyton Jones and Sir Tony Hoare, who are both at Microsoft Research in Cambridge. Did you feel a sense of competition with them?
MILNER:
I’ve talked with them, much less with Simon than with Tony. The reason is that I stopped working on programming language design a long time ago; and in ML I was more focussed on establishing a formal definition that would remain unchanged, whereas Simon – quite rightly and fruitfully – aims at conceptual development through the medium of a changing language. I’m sure we agree that it’s hard to do both at once, and that the two aims are complementary.
Tony Hoare and I made proposals for concurrency at around the same time, 1978. We have certainly competed, but the competition has been fertile because, more and more, we can distinguish complementary aims.
Tony’s system for refining specifications into programs has been successful and influential. For modelling things that are not programs, e.g. in biology or in ubiquitous systems such as a distributed system for controlling driverless traffic, different concepts are prominent. So, as informatics expands outwards from individual computers via networks, wireless, sensors and so on, the necessary richness of modelling notions far exceeds what we perceived thirty years ago.
RM:
Ferranti, which you joined after leaving university, had influential collaborations with the university computing departments at Manchester and Cambridge. Was it these alliances that made you decide on an academic career rather than a commercial one?
MILNER:
Not really. It was much more the growing richness of computing ideas, including (for me, then) the relation between mathematical logic and what began to be called artificial intelligence.
RM:
Did you achieve what you hoped to achieve when you set up the Laboratory for Foundations of Computer Science (LFCS) while at Edinburgh University?
MILNER:
In one sense, definitely. It was really four of us: Rod Burstall, Gordon Plotkin, Matthew Hennessy and me. The name ‘Laboratory for Foundations of Computer Science’ purposely juxtaposes the words laboratory and foundations. The idea was that we would develop theories on the basis of practical experience (or experiment, which means the same in French).
The Lab still thrives, and has generated a wealth of theories. I wish we had been able to link it sooner to industrial experience. But the industry has been over-preoccupied with the market, at the expense of rigorous design. It would have been nice if the Y2K problem had not caused immense expense in trying to avoid the predicted possible disaster.
Lack of disaster could have been safely assured by the use of rigorous programming notions that had been around for 20 years.
RM:
Why the panic then with Y2K? A media frenzy or corporate insanity?
MILNER:
I think it was knowledge that systems had been built without rigorous underpinning, and that the rigour could not be retrofitted in time. Both true. This led to a corporate sense of helplessness.
RM:
You worked briefly with John McCarthy at the Stanford Artificial Intelligence Laboratory. When it was formed in 1964 he told his Pentagon backers that the construction of an artificially intelligent machine would take about a decade. Four decades on, AI is beginning to look like the modern version of the philosopher’s stone – or is it? How much nearer are we to a machine that betters human intelligence?
MILNER:
It begins to seem that there are grades of intelligence. People now talk of ubiquitous computing systems that will predict our needs and serve them without our intervention. They will ‘discover’ what is around them; wandering agents will ‘negotiate’ with others to allocate computational resources enabling them to achieve their ‘goals’. Such agents will ‘trust’ or ‘mistrust’ one another. These anthropomorphic words are understood technically. Where along the spectrum do we declare ‘This is Intelligence’?
RM:
You received the ACM Turing Award in 1991, and I’d like to ask you a question about Alan Turing. Earlier this year Gordon Brown followed Tony Blair’s example and apologised for things that he had nothing to do with. In Blair’s case it was the Irish potato famine; with Brown it was the hounding of Alan Turing. Do you think it was a worthwhile gesture or a cynical piece of public relations work?
MILNER:
Definitely a worthwhile gesture. If we believe that our society has progressed from bad to less bad ideas, then as a society we have a duty to express regret for the old ideas, and it does no harm for this regret (not blame) to be acknowledged by a leader, as well as by the rest of us.
RM:
The international Grand Challenge of the Human Genome project triggered a shift in the research ethos of biologists towards the larger scale and the longer term accumulation of knowledge. Do you think a similar breakthrough is now in prospect for the verification of software?
MILNER:
I talked above about our industry’s preoccupation with the market. We now admit that legacy software is stagnant and inscrutable. This admission may not suffice to ensure the verified design of new, even more complex software, as in ubiquitous systems. One of the UK’s Grand Challenges for Computing Research is called ‘Ubiquitous Systems: Experience, Design and Science’. It’s forming a community that will try to unify these three aspects of modern systems. We would love to succeed.
RM:
William Faulkner, the Nobel Prize-winning author, said of writers: ‘All of us failed to match our dreams of perfection.’ Would you put yourself in this category?
MILNER:
Dreams of perfection, for a scientist, mean simply ‘avenues for exploration’. The excitement is as much in perceiving new challenges from the summit of a foothill as it is from reaching that summit.
RM:
Is there anything you haven’t researched in computer science that you would like to?
MILNER:
I’m working on theoretical models that aim to give us the ability to engineer and analyse ubiquitous systems more soundly than has often been the case for present-day software. I would dearly love to follow this kind of work through to the front line of design and experience, but I don’t think I’ll be around for long enough. I’m making up for it by listening and talking to those who will be!