John Ku -- Programming (Commercial Websites and Artificial Intelligence)
When did you start thinking about leaving the PhD program at Michigan? Why were you thinking about leaving?
By my 6th year, I was thinking of leaving, probably even by my 5th year. I actually left around my 7th year. I don’t know if there was any one reason. I think there were lots of reasons. One of the big ones was the Machine Intelligence Research Institute (MIRI), or the Singularity Institute for Artificial Intelligence as it was known then. I think there is a significant probability of AI arriving relatively soon, especially relative to progress in AI safety research. Once smarter-than-human AI does arrive, it’s likely to have an enormous impact, and whatever values or goals it is programmed to seek will likely be realized very literally, without regard to what outcome might have been intended. Getting the values right requires solving a lot of difficult philosophical problems. At the time, I found academia to be very conservative. If I brought up these issues, I’d just get laughed at.
You mean ethical problems? Or technological problems?
Both, really. And I think both have to be pursued in tandem. MIRI researchers have been focusing mostly on the technical problems: forget about having an ethical goal, just having some goal and having that goal remain stable is a huge mathematical problem that they’ve been working on.
Did you want to work more on the technological side as well?
I wanted to work on both. Even working on the ethical side of things, I think that the ethical questions surrounding AI have to be approached from a technical angle, which was rather foreign to philosophy at the time. I also felt like it was going to be a pretty big problem that I wouldn’t be able to solve right away. I certainly didn’t think I’d be able to solve it by the time I needed to do a dissertation. So there was a bit of a publish-or-perish mentality driving some of this as well. I could have focused on some more normal philosophical problems and written something up. But I felt like time was ticking and I didn’t feel like changing my line of research.
Along with this, I also realized that there is a lot more to being an academic than just doing research. I figured that I would rather program for a living than grade papers, attend committee meetings, and do everything else that comes with being a professor besides research. Lastly, I wanted to make and donate money.
What kind of work did you find?
To start off, I picked one of these newer programming languages I liked. I began freelancing while I was learning programming again because I hadn’t programmed in a while and I hadn’t done much web work before. Then I leveraged my personal network. My sister is a comedian, so she has some comedian friends. I did some web work for one of them. Then I found Metaspring. I was searching around and they were a local web development company that used the same language I had chosen. So I started up with them.
What were you doing for them?
Mostly programming, but eventually I took on more responsibilities, like programming for clients and then doing system administration work maintaining our servers and our clients’ servers. Eventually I became a co-owner of the company, but this was as we were losing employees—toward the end I was doing everything, actually, from the sales process to product management to hiring and firing people.
What came after Metaspring? What do you do now?
With my business partner’s permission, I took on one of our potential clients as my own as I was leaving Metaspring, so I was freelancing. And I moved to Berkeley to try a startup with a friend of a college friend. We were trying to make the company work, looking for investors. Our idea shifted around a lot but at one point it was to have a browser extension, like a sidebar, that overlays whatever you’re currently reading and brings up related information.
Eventually, my friend and I gave up on the startup and I went back to supporting myself as a freelancer. I joined my current company, Pistn, as Chief Technology Officer. We do websites and marketing for small businesses, mostly auto repair shops. This was the company I had been doing freelance work for.
It sounds like you find programming gratifying, that you want to do it independent of the AI problem—or at least you’re good at it.
Yeah, I find programming to be gratifying. Maybe in an ideal world, I’d like to do pure research, but it’s not too unideal.
How is the AI project going now?
I think it’s going pretty well. I am doing better philosophy work than I was when I was in academia. Maybe not as consistently, since it’s done off and on in my spare time, but I think that I’ve succeeded in what I set out to do in terms of reducing the philosophical problems regarding metaethics and intentionality to merely technical problems. I think I have an outline for how an AI could take, say, a mathematical model of a human brain and then back out a function that maps each circumstance to the actions a person should take. It’s basically recovering ethics from the human brain.
I model an agent as a society of decision subagents. This includes a normal first-order decision agent that decides what to do based on its preferences. But agents can also deliberate about what to prefer, so there are higher-order decision agents that influence lower-order preferences instead of actions. Given a network of such higher-order decision agents, you can set any relevant beliefs to true and iteratively allow them to influence each other until you reach a stationary equilibrium. The first-order utility function that results from that state would be the agent's rational utility function.
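That iteration to a stationary equilibrium can be sketched roughly in code. This is a minimal illustrative sketch only: the representation of utilities as a dictionary, the names, and the toy update rule are my assumptions, not Ku's actual formalism.

```python
# Minimal sketch: higher-order agents repeatedly revise lower-order
# preferences until the utilities stop changing (a stationary equilibrium).
# Representing a utility function as a dict is an illustrative assumption.

def equilibrate(first_order_utils, higher_order_agents, max_iters=10_000, tol=1e-9):
    """Let higher-order decision agents influence lower-order preferences
    until a fixed point is reached; return the resulting first-order
    utility function (the candidate rational utility function)."""
    utils = dict(first_order_utils)
    for _ in range(max_iters):
        new_utils = dict(utils)
        for agent in higher_order_agents:
            new_utils = agent(new_utils)  # each agent nudges preferences
        if all(abs(new_utils[k] - utils[k]) < tol for k in utils):
            return new_utils  # stationary equilibrium reached
        utils = new_utils
    return utils  # no fixed point within max_iters; return the last iterate

# Toy higher-order agent: on reflection, it discounts an impulsive
# preference each round, so that preference's equilibrium value is zero.
def reflective_agent(utils):
    revised = dict(utils)
    revised["impulse"] = 0.5 * utils["impulse"]
    return revised

rational_utils = equilibrate({"impulse": 1.0, "health": 0.3}, [reflective_agent])
```

In this toy run the impulsive preference decays toward zero while the untouched preference is left alone; the fixed point of the whole network plays the role of the rational utility function.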
To make all of this precise, I rely on Chalmers' view of when a computation is instantiated by a physical system and develop a view of intentionality much like Dennett's, but more computational and perhaps more realist. Chalmers argues that a physical system instantiates a computation when you can map the input, output and internal states of the computation to similar divisions of physical states in such a way that any state transitions between computational states correspond to causal transitions between the physical states they are mapped to. Given a model of a brain, the AI could first filter all possible societies of decision agents by which ones meet Chalmers' condition and then roughly choose the one which best compresses the brain's behavior. You might also give some weight to a principle of charity that measures rationality or coherence, say by counting violations of axioms.
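The filter-then-compress procedure just described can be illustrated with a small sketch. This is only a toy under loud assumptions: the deterministic state-transition model of the "brain," the `Candidate` structure, and the scoring rule combining description length with a charity penalty are stand-ins I introduced for illustration, not the actual proposal.

```python
from dataclasses import dataclass

# Illustrative sketch: filter candidate computational interpretations by a
# Chalmers-style instantiation condition, then pick the one that best
# compresses behavior, penalized by incoherence (a charity principle).

@dataclass
class Candidate:
    mapping: dict              # physical state -> computational state
    comp_next: dict            # computational state-transition function
    description_length: float  # how well this interpretation compresses behavior
    axiom_violations: int      # charity: count of rationality-axiom violations

def instantiates(brain_next, cand):
    """Chalmers' condition (simplified to deterministic systems): every
    causal transition of the physical system must map onto the
    corresponding computational state transition."""
    return all(cand.mapping[brain_next[p]] == cand.comp_next[c]
               for p, c in cand.mapping.items())

def best_interpretation(brain_next, candidates, charity_weight=1.0):
    """Filter by the instantiation condition, then choose the candidate
    minimizing description length plus weighted axiom violations."""
    valid = [c for c in candidates if instantiates(brain_next, c)]
    return min(valid, key=lambda c: c.description_length
                                    + charity_weight * c.axiom_violations)

# Toy physical system: a two-state oscillator p0 <-> p1.
brain = {"p0": "p1", "p1": "p0"}
good = Candidate(mapping={"p0": "A", "p1": "B"},
                 comp_next={"A": "B", "B": "A"},
                 description_length=2.0, axiom_violations=0)
# This candidate claims the system sits still, which its causal structure
# does not support, so it fails the instantiation condition.
bad = Candidate(mapping={"p0": "A", "p1": "B"},
                comp_next={"A": "A", "B": "B"},
                description_length=1.0, axiom_violations=0)
chosen = best_interpretation(brain, [good, bad])
```

Note that the invalid candidate is rejected even though it scores better on compression: the instantiation filter runs first, and only then does the compression-plus-charity score break ties among the survivors.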
Are you satisfied with your path?
Yeah, I think so. One thing is that I am pleasantly surprised by how both academia and public opinion have been catching up to my view on the importance of AI and AI safety, the view that originally had me leaving academia. So perhaps I didn’t give academics enough credit; it didn’t take quite as long for them to come around as I expected. It does make me question whether it was a mistake to leave: could I have been doing this within academia? But at the same time, while the programming I did for work never perfectly aligned with my AI interests, it made it easier to look into programming languages and techniques that are peripherally related to this AI research. Programming for a living made it easier to move into that mindset.
By now, I think I have enough original research that if I could get it written up, it would be enough to count for a dissertation, so sometimes I’ve thought of that. But I’m more interested in doing the research than writing it up.
Do you feel that way about programming, about getting the code right?
I find that a little bit easier. I’ve been trying to merge the two, writing up philosophy as code. There’s even a programming language based on set theory that could go hand in hand with this more mathematical, computational approach. So I have been trying to write up my philosophy in set theory code.
Advice or parting words for other philosophers who are thinking of pursuing their interests outside of academia?
I think I was lucky to have these marketable skills to fall back on, but philosophers in general have a lot of marketable skills. One generalizable lesson is that I went into programming knowing it had a meritocratic culture. If you dropped out or don’t have the right degree, none of that really matters: if you can program and create software, generally people will recognize that. So whether or not it’s software development, it’s good to choose a part of the economy that is a meritocracy. I also tried some other things, like starting a company.
I even tried to start a philosophy nonprofit. You can take a look at it here (choose Academic Philosophy when prompted). The basic idea was to index philosophical publications by the arguments they contain, noting, for each step of an argument, its agreement or disagreement with the arguments of other indexed publications.
I think employers tend to look favorably on that kind of initiative and that it is something that is desirable in an employee. That was something I experienced from being on the hiring side.