GitHub Copilot is My Copilot
I can’t resist blog post titles that would make a really stupid bumper sticker.
If you haven’t heard about GitHub Copilot, it is a plugin for Visual Studio Code that uses AI to autosuggest code as you type. It sounds too good to be true, so of course I signed up for the technical preview to find out for myself.
When I was eventually granted access, I turned it on without any hesitation. After a few minutes that were a mix of amazement and confusion, I turned it off.
Why? Honestly, I mostly found the suggestions it was making to my code disorienting. I’m obviously familiar with my typical thought process, but Copilot kept trying to lead me in different directions. The frequency of suggestions was also distracting. The site describes Copilot as ‘your AI pair programmer.’ I’m not as comfortable with pair programming as I should be, but imagine pair programming with a partner who suggested something every time you typed.
The final issue that motivated me to turn it off was accuracy. The things it was suggesting were typically not what I wanted. Following the examples shown on the site, I also tried providing guidance to Copilot via comments. Those comments had to be pretty elaborate, and even then the result was mainly large blocks of code that still missed the mark.
Since it was mainly distracting me and breaking my flow, Copilot had to go.
A funny thing happened after I turned it off, though: I kept thinking about my new friend Copilot.
I was left thinking about the need to communicate with it via function signatures and clear comments. AI or no, that seemed like a positive habit I should practice more. And while Copilot’s suggestions were overzealous, I did find myself missing the cases where it accurately read my mind.
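To make that concrete, here’s a made-up example (my own, not something from Copilot’s docs) of the kind of communication I mean: a doc comment and a clear signature that spell out the intent, with the body being roughly what Copilot tries to fill in.

```typescript
/**
 * Returns the median of a list of numbers, or undefined for an empty list.
 * The comment and the signature act as the prompt; Copilot works from these.
 */
function median(values: number[]): number | undefined {
  if (values.length === 0) {
    return undefined;
  }
  // Copy before sorting so the caller's array isn't mutated.
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 === 0
    ? (sorted[mid - 1] + sorted[mid]) / 2
    : sorted[mid];
}
```

Whether or not a robot ever reads it, writing that header first forces me to decide what the function actually does before I start typing the body.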
Since I kept thinking about it, I eventually gave up and turned Copilot back on. Going back into it with my expectations tempered made a big difference. Rather than thinking of it as the Genie from Aladdin (either Robin Williams or Will Smith, depending on your personal preference), it helped to see it as a slightly more intelligent autocomplete. I’ve learned to ignore it when it is wrong, which is still often. But as it trains, it is getting better. Copilot is now pretty good at suggesting complete lines when things are simple, and saving myself the typing still kind of feels like magic.
I’ve also found Copilot helpful in a few cases I didn’t anticipate. When I’m working in an area I’m somewhat new to (TypeScript and Jest these days), Copilot can often give me enough of a nudge to keep me from getting stuck. And most interestingly, I’ve found that it can help me push through adding documentation when I’m not motivated to do so. Your first draft is always garbage, so why not let a robot take care of some of that for you? And having something on the screen that isn’t in your voice is a great motivator to revise it. Much better than staring at a blank README.
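The Jest nudges usually look something like this (again, a hypothetical sketch reusing the median function above): once a descriptive test name is written out, Copilot will often propose a plausible body on its own.

```typescript
import { median } from "./median"; // hypothetical module from the sketch above

describe("median", () => {
  // With a descriptive test name in place, Copilot tends to
  // suggest an assertion close to this one.
  it("returns the middle value for an odd-length list", () => {
    expect(median([3, 1, 2])).toBe(2);
  });

  it("returns undefined for an empty list", () => {
    expect(median([])).toBeUndefined();
  });
});
```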
Positives aside, there are also legitimate ethical concerns with open source code being used to power Copilot’s AI. Open source code helping shape proprietary code seems problematic. And if you take things to their logical conclusion, you could imagine your own code being consumed by Copilot and then resurfacing in a software project you find objectionable. Providing some way for repositories to opt out would help, but it seems like a difficult problem to solve.
I’ve already stuck with Copilot longer than I expected, but it is still not out of the question that I could just nope back out of it someday. Regardless, it has been a fun little experiment and has made me think about the code I write in some new ways.