Mark Greaves -- Artificial Intelligence and National Defense Research
What stage of your academic career were you in when you decided to seek work outside of philosophy? Describe the overlap between your academic and nonacademic careers, if any.
I started thinking about how to blend philosophy and computer science in the early years of my graduate studies. I’ve always had a serious interest in both disciplines. After my first two years in Stanford’s philosophy PhD program, I took a leave of absence to get a master’s in computer science at UCLA, and then came back to Stanford and finished a philosophy PhD with a specialty in logic and philosophy of language.
When I graduated, I had solid skills in analytic philosophy, firsthand knowledge of advanced computer architectures, and a budding interest in artificial intelligence (AI). I thought this would be an attractive combination for an assistant philosophy professor, but at least in the mid-90s when I was on the philosophy job market, academic philosophy departments didn’t have much interest in this combination of skills. Virtually all of the jobs on offer were in the traditional subcategories of academic philosophy. Also, my wife had a good job in Seattle, and we decided that we shouldn’t move unless I could find a really good fit. So, with some regret, I decided to look at non-philosophy jobs in the Seattle area.
What kind of work did you find, and how did you find it?
During the time I was working on my PhD, Stanford’s philosophy department had partnered with the computer science and linguistics departments to sponsor an interdisciplinary research organization called the Center for the Study of Language and Information (CSLI). CSLI was full of grad students, faculty, and companies who were pursuing interdisciplinary work across these disciplines. One of the partner companies of CSLI was Boeing, specifically Boeing’s R&D division in Seattle. Through networking and introductions facilitated by CSLI, I was offered a job as a research scientist in the Natural Language Processing group at Boeing in Seattle.
That first Boeing job set me on a course that I’ve followed my whole career. I stayed with Boeing for about five years, doing applied research based in the type of logic and formal semantics that I had pursued as a PhD student. One of the main funders of my work was the Defense Advanced Research Projects Agency (DARPA). DARPA is the premier blue-sky R&D laboratory for the US Department of Defense, and honestly one of the most amazing organizations in the US Government. In 2001, I was invited to join DARPA for a four-year appointment as a program manager. DARPA was open to hiring a philosopher—in fact, the work they fund in advanced AI and computer science is rooted in nontraditional approaches—as long as I could deliver on the things I had promised. I built a good reputation at DARPA, and so after my DARPA tour was over, I moved back to Seattle to direct a collection of global R&D programs on behalf of Paul Allen, the billionaire co-founder of Microsoft. And now I am in a technical leadership role at one of the US National Laboratories. So I’ve built my career as a series of steps, all of which in some way involved leading or working on large research teams in AI and advanced computer science.
What is a blue-sky lab?
“Blue sky” is a term for research organizations that focus on high-risk/high-reward challenges, taking on extremely hard problems, failing often, but always staying focused on finding solutions with the potential for revolutionary impact. Many organizations claim to be blue-sky and risk-seeking, but when you look at their actual programs and tolerance for failure, they are basically incremental in their approach. DARPA is a true blue-sky organization. At DARPA, a good example of this approach is military stealth technology. The extremely hard problem was that in the 70s, it became fairly clear that surface-to-air missiles could reliably shoot down even the fastest and most maneuverable aircraft. The revolutionary idea was to make something as large and complexly shaped as an airplane invisible to radar. Through persistent work and many failures, DARPA originated modern stealth technology.
What do you do now, and what's interesting about it?
I am currently Technical Director for Analytics in the National Security Directorate of the Pacific Northwest National Laboratory (PNNL). My current work is a balance between leading research teams of incredibly smart scientists, and working directly on problems that have real significance for the nation.
As for what’s interesting, that’s easy. PNNL is one of the major US national laboratories, originating in the Manhattan Project and currently overseen by the US Department of Energy. The national labs have developed into organizations that feel like quasi-universities—full of dedicated scientists who do research, attend conferences, publish frequently, work with visiting grad students and postdocs, and often have joint academic appointments—but they are also places where actual critical systems are engineered and built. PNNL’s priorities are explicitly driven by the way that scientific progress can serve US national needs. This leads to a real sense of mission at PNNL, which is one of the most rewarding parts of the job.
What does your career offer that wasn't available to you as a philosopher?
My current job with PNNL seems to have many of the most attractive parts of faculty work—the opportunity to write papers and participate in academic debate, work with very smart people, mentor students, and a certain freedom to find your own specialties and make your own reputation—along with the ability to work with teams of people who are dedicated to turning research into results that make a difference for the nation. I no longer do philosophy in any active way; I mainly work with teams of computer scientists. However, in trying to work out problems in advanced artificial intelligence, we continually run into foundational philosophical concerns around action, causality, intention, belief, and language. So I still get to read philosophy papers and occasionally talk to people in philosophy departments. It’s a great life.
Would it make sense for me to ask you what kinds of programming skills or experience are most in demand and/or lend themselves to the most interesting kinds of work?
Programming languages and modeling techniques go in and out of fashion fairly rapidly—what was critically important 15 years ago is antique now. Currently I mostly use a large statistical package called R and a programming language called Python to test out AI ideas for data analysis, but some members of my team are starting to shift to Google’s recently released TensorFlow AI package. So there’s no fixed answer to “get this kind of experience.” General fluency in AI and data science is currently in high demand, and I believe this level of demand will continue for at least another decade. While there is no getting around the need for some fluency in techniques of computer science and machine learning, there are some excellent online courses (e.g., the Coursera or edX ones) which can get you started. I do think that philosophy training conveys a solid ability to write cogently and grapple with sophisticated arguments, which is surprisingly valuable. But in particular, training in cognitive science, metaphysics, philosophy of language, and philosophy of mind gives philosophers a valuable perspective on contemporary research in AI.
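To give a flavor of the kind of quick, exploratory data-analysis test mentioned above, here is a minimal Python sketch. The dataset and variable names are hypothetical illustrations, not anything from the interview; it uses only Python's standard library.

```python
# A toy "quick experiment": compute simple summary statistics on a
# small dataset before reaching for a heavier framework like TensorFlow.
import statistics

# Hypothetical measurements from a data-analysis experiment
readings = [2.1, 2.4, 1.9, 2.7, 2.2, 2.5, 2.0]

mean = statistics.mean(readings)      # arithmetic mean
stdev = statistics.stdev(readings)    # sample standard deviation

# Flag readings more than two standard deviations from the mean
outliers = [x for x in readings if abs(x - mean) > 2 * stdev]

print(f"mean={mean:.2f}, stdev={stdev:.2f}, outliers={outliers}")
```

The point is less the specific code than the workflow: small, fast iterations in a general-purpose language, which online data-science courses teach early on.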