At Riot, our two primary languages for services are Java and Go. As a result, both languages are viewed as first-class citizens in terms of support – and because we deploy using containers, both are interoperable and relatively easy to package and deploy. We love Go at Riot for a number of reasons, including:
There’s also been a recent movement within the tech industry around Go, especially with regard to microservices, and it helps being able to tap into that interest and drive in the developer space. It’s also becoming increasingly popular in the systems space for software like etcd, Docker, Kubernetes, Prometheus, and much more. There are excellent libraries for structured logging, consensus algorithms, and websockets. Additionally, the standard library includes things like TLS and SQL support, so you can be very productive in Go very quickly.
The Service Lifecycle team’s primary project is our deployment tool, which is used to deploy and manage the lifecycle of services running in our Docker runtime. If you’ve read our earlier “Running Online Services” series you’ll get a better idea of the problem space we’re working in. Our deployment tool is written in Go because it enables us to quickly roll out updates, onboard new engineers to our tech stack, and quickly iterate from early development to production. It is backed by MySQL and a single instance can target multiple datacenter locations. There are a number of challenges that Go makes it easier for us to solve including:
Our deployment tool operates on a custom YAML specification that describes what an app needs to run. There are several third-party Go libraries which implement JSONSchema for us. Go also provides native support for marshaling and unmarshaling Go structs to and from JSON, as well as third-party support for YAML.
A structured YAML schema that the deployment tool might consume.
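To illustrate the kind of parsing involved, here’s a minimal sketch: a hypothetical `AppSpec` struct (the field names are invented for illustration, not Riot’s actual schema) unmarshaled with the standard library’s `encoding/json`. The same struct tags carry over to third-party YAML packages such as `gopkg.in/yaml.v3`.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// AppSpec is a hypothetical, simplified version of the kind of
// specification a deployment tool might consume.
type AppSpec struct {
	Name     string            `json:"name" yaml:"name"`
	Image    string            `json:"image" yaml:"image"`
	Replicas int               `json:"replicas" yaml:"replicas"`
	Env      map[string]string `json:"env,omitempty" yaml:"env,omitempty"`
}

// ParseSpec unmarshals a JSON document into an AppSpec. Swapping in a
// YAML unmarshaler works the same way thanks to the struct tags above.
func ParseSpec(data []byte) (AppSpec, error) {
	var spec AppSpec
	err := json.Unmarshal(data, &spec)
	return spec, err
}

func main() {
	doc := []byte(`{"name":"demo","image":"demo:1.0","replicas":3}`)
	spec, err := ParseSpec(doc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s runs %d replica(s) of %s\n", spec.Name, spec.Replicas, spec.Image)
}
```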
Our tool connects with a number of other microservices for things like service discovery, logging, alerts, configuration management, provisioning databases, and more. The primary method of communication is HTTP requests. This means we often have to consider things such as the lifecycle of the request, internet blips, timeouts, and more. Fortunately, Go provides a very solid HTTP client with some defaults you’ll definitely want to tweak. For example, the client will never time out by default.
Performing an HTTP request and printing the body of the response.
Oftentimes data centers can be isolated through additional layers of security, especially when working with partner regions. One very useful aspect of Go we’ve used for multiple projects is the Go httputil reverse proxy. This allows us to quickly proxy requests, add middleware for the lifecycle of requests to inject additional authentication or headers, and make everything relatively transparent to clients.
At Riot, we must interface with a variety of third-party services including HashiCorp Vault, DCOS, AWS, and Kubernetes. Most of these solutions provide native API client libraries for use by Go applications. Sometimes we use or fork third-party libraries depending on our needs as well. In all cases, we’ve been able to find adequate support for our needs.
Additionally, during development, it’s easy for us to recompile and run a local version of our deployment tool for quick testing or debugging. It also allows us to easily share code and libraries with other teams in our space.
Now that we’ve taken a look at how my team uses Go for deployment, let’s take a look at two other examples.
Hi, I’m Chad Wyszynski from the RDX Operability team, and I’d like to show you how my team uses Go to minimize request latency in our operational monitoring pipeline. Most of Riot’s logs and metrics flow through my team’s monitoring service. It’s a constant, high volume of traffic that spikes higher when something goes wrong, so the service must maintain high throughput and low latency. Who wants to wait seconds to log an error? Go channels help us meet these requirements.
The operational monitoring service exists for one purpose: to forward logs and metrics to backend observability platforms, such as New Relic. The service first transforms request data into the format expected by the backend platform, then it forwards the transformed data to that platform. Both of these steps are time-consuming. Instead of forcing clients to wait, the service places request data into a bounded channel for processing by another Goroutine. This allows the service to respond to the client almost immediately.
But what happens when the bounded channel is full? By default, a Goroutine will block until the channel can accept data. We use Go’s time.After to bound this wait. If the channel can’t accept request data before the timeout, the service 503’s. Clients can retry the request later, hopefully after some exponential backoff.
The real win with the channel-based design came when migrating from one observability backend to another. Riot recently moved all metrics and logs from a hand-rolled pipeline to New Relic. The operational monitoring service had to forward data to both backends while teams configured dashboards and alerts on the new platform. Thanks to Go channels, dual-sending added essentially no latency to client requests. Our service just added request data to another bounded channel. The max server response time, then, was based on the time a Goroutine waited to put data onto a destination channel, not how long it took a destination server to respond.
I was new to Go when I joined Riot, so I was excited to see a practical use case for channels and Goroutines. My colleague Ayse Gokmen designed the original workflow; I’m stoked to share our work.
Justin O’Brien here from the Competitive team on Valorant! My team uses Go for all our backend services – as do all feature teams on Valorant. Our entire backend microservice architecture is built using Golang. This means that everything from spinning up and managing a game server process to purchasing items is all done using services written in Go. Though there have been many benefits to using Golang for all our services, I’m going to talk about three specific language features: concurrency primitives, implicit interfaces, and package modularity.
We leverage Golang concurrency primitives to add back pressure when operations start slowing down, to parallelize independent operations, and to run background processes within our applications. One example: we often find ourselves in a chain of execution on a match but need to do something for each player, such as loading skin data for every player when starting a match. Our requirements for a shared function to accomplish this were that it return once all subroutines finished executing, and that it return a list of any errors that occurred.
func Execute(funcList ...func() error) []error
We accomplished this by using two channels and a WaitGroup. One channel was to capture the errors as each thunk executed, while the other was a finished channel that a Goroutine sent on when the WaitGroup finished. The language features made this very common pattern straightforward to implement.
Another language feature we use extensively is implicit interfaces. We leverage them heavily to test our code and as a tool for creating modular code. For example, we decided early on that we would have a common datastore interface in all our services. This is an interface that every one of our services uses to interact with a data source.
This simple interface allowed us to implement many different backends to accomplish different things. We typically use an in-memory implementation for most of our tests, and the small interface makes it very lightweight to implement inline in a test file for unique cases like counting accesses or testing our error handling. We also use a mixture of SQL and Redis for our services and have an implementation of the interface for both. This makes attaching a datastore to a new service particularly easy, and it also makes more specialized cases possible, like a write-through in-memory cache backed by Redis.
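As a sketch, here’s a hypothetical minimal version of such a datastore interface (the method set is invented; the real interface presumably has more to it) alongside an in-memory implementation of the kind used in tests:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// Datastore is a hypothetical minimal common interface; SQL- and
// Redis-backed implementations would satisfy it implicitly, with no
// "implements" declaration anywhere.
type Datastore interface {
	Get(key string) ([]byte, error)
	Put(key string, value []byte) error
}

var ErrNotFound = errors.New("key not found")

// MemStore is an in-memory Datastore, handy for tests.
type MemStore struct {
	mu   sync.RWMutex
	data map[string][]byte
}

func NewMemStore() *MemStore {
	return &MemStore{data: make(map[string][]byte)}
}

func (m *MemStore) Get(key string) ([]byte, error) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	v, ok := m.data[key]
	if !ok {
		return nil, ErrNotFound
	}
	return v, nil
}

func (m *MemStore) Put(key string, value []byte) error {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.data[key] = value
	return nil
}

func main() {
	var store Datastore = NewMemStore() // implicit interface satisfaction
	store.Put("player:1", []byte("skin-data"))
	v, _ := store.Get("player:1")
	fmt.Println(string(v))
}
```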
Lastly, something I’d like to call out that isn’t necessarily a language feature is the wide selection of third-party packages that can often be used interchangeably with common built-in packages. Because of the modular nature of Golang packages, this has let us make changes I would expect to be large refactors with very small diffs. For example, a few of our services were spending a lot of CPU cycles serializing and deserializing JSON. We used Golang’s out-of-the-box json package when first writing all our services. It works for 95% of use cases, and typically JSON serialization doesn’t even show up on a flame graph (which, now that I think of it, is a reminder that Golang’s built-in profiling tools are top notch as well). There were a few cases, specifically around serializing large objects, where a lot of a service’s time was spent in the json serializer. We set out to optimize, and it turns out there are many alternative third-party JSON packages that are compatible with the built-in package. This made the change as easy as changing this line:
Afterwards, any calls to the JSON library used the third-party library which made profiling and testing different packages easy.
Aaron back again! Now that we’ve taken a look at some Golang use cases across Riot, I’d like to show you how we’re all connected. The flexibility teams have when choosing tech stacks relies on the collaborative environment of Rioter technologists.
Riot Games is a very social company, and our Tech department encourages Rioters to engage with learning and development communities. For example, our various Communities of Practice enable groups of Rioters with common interests to gather regularly to learn and share together. One of the most active technical communities is the Go community, which I currently run. There’s a Slack channel to discuss new proposals, and we have a monthly meetup where members present either a topic they’re aware of or learning about, or Riot projects written in Go.
We also aspire to involve the community outside of Riot with talks from open source library maintainers. The CoP is also a place to coordinate changes that impact multiple teams such as discussions around security when the module mirror launched. There are also discussions around bumping build containers, dealing with gotchas that we may encounter, or asking general questions about approach, tooling, or libraries to seek out individual expertise in another part of the org.
I personally love having a channel consisting of Go enthusiasts across teams and disciplines to bounce ideas, discuss language changes, and share libraries we come across. This channel was the central point of discussion as we transitioned from old dependency solutions to Go modules and it’s a great way to meet engineers who are passionate about the language.
The Go CoP’s flier.
At Riot, a number of teams maintain services and tools written in the Go language. Go provides a robust standard library and great third party community support to help satisfy our development needs.
Our Community of Practice is a great way for developers to contribute to Go use at Riot and share their learnings and experiences. We’re excited about the future of Go at Riot, with the ability to stay flexible and highly communicative across the entire company.
Thanks for reading! Feel free to post any questions or comments below.