Google Cloud Platform – Stackdriver profiler

The Stackdriver profiler gathers CPU usage and memory allocation information from your applications. This is different from Stackdriver monitoring because, with the Stackdriver profiler, you can tie CPU usage and memory allocation back to your application's source code. This helps you identify the parts of your application that consume the most resources and lets you examine the performance of your code.

Let’s go over a hands-on exercise to explore and understand the Stackdriver profiler. In this lab, we will download a sample Go program and run it with the profiler enabled. We will then explore and use the profiler interface to capture data.

Let's log into our GCP console and select a project. Then click the Stackdriver | Profiler tab in the sidebar.

You will see the Stackdriver Profiler main page:

The previous action also enables the Stackdriver profiler API.

Next, we open a Cloud Shell session and download the sample Go program from GitHub:

@cloudshell:~ (stackdriver-test-123)$ go get -u github.com/GoogleCloudPlatform/golang-samples/profiler/...

Next, change into the directory that contains the sample code for the Stackdriver profiler (you can use ls along the way to list the directory contents):

cd ~/gopath/src/github.com/GoogleCloudPlatform/golang-samples/profiler/profiler_quickstart

The file is called main.go. The program creates a CPU-intensive workload to provide data to the profiler, and it is configured to use the Stackdriver profiler, which collects profiling data from the program and saves it periodically. As the program runs, you will see only two messages that indicate its progress:

successfully created profile CPU
start uploading profile      

It is important to note that you must configure your code with the Stackdriver profiler to be able to collect profiling data. You can profile code written in Go, Java, and Node.js, as well as code running outside GCP. Not every profile type is available on every platform. The following table illustrates this:

Profile types: CPU time, Heap, Contention (Go), Threads (Go), and Wall time. (The original table showing per-language availability did not survive extraction.)
Let’s look into each of these profile types:

  • The CPU time for a function is how long the CPU spent executing that function's code. It includes only CPU processing time, not CPU wait time.
  • Heap profiling helps you find potential memory usage inefficiencies in your programs.
  • Contention lets you profile mutex contention in Go, so you can determine how much time is spent waiting for mutexes and how often contention occurs.
  • Threads lets you profile thread usage for Go, capturing information on goroutines and Go's concurrency mechanisms.
  • Wall time (wall-clock time) measures the time elapsed between entering and exiting a function.
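To make the contention profile type concrete, here is a minimal, self-contained Go sketch (the function names are illustrative, not from the sample program): two goroutines repeatedly lock the same mutex, and the time each spends blocked in mu.Lock() is exactly what a contention profile would attribute to that call site.

```go
package main

import (
	"fmt"
	"sync"
)

// contend has `workers` goroutines increment a shared counter under a
// single mutex. The time goroutines spend blocked in mu.Lock() is what
// a mutex-contention profile attributes to this call site.
func contend(workers, iters int) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	counter := 0
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < iters; i++ {
				mu.Lock()
				counter++ // critical section both goroutines fight over
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return counter
}

func main() {
	fmt.Println("final count:", contend(2, 1000)) // prints 2000
}
```

The more workers you add, the more time is spent waiting rather than working, which is the inefficiency a contention profile surfaces.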

You can use profiling agents on Linux in the Compute Engine, Kubernetes Engine, and App Engine flexible environments. Depending on the language you use, you will need to make code additions to enable specific profile types.

Code additions are out of scope for this book. For now, just remember that code additions allow your profiler to run and collect data. This data is then analyzed using the profiler interface.
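For orientation only, here is a minimal sketch of what such a code addition looks like in Go, using the cloud.google.com/go/profiler agent. The service name and version are placeholder values, and actually running this requires a GCP project with the Profiler API enabled and appropriate credentials:

```go
package main

import (
	"log"

	"cloud.google.com/go/profiler"
)

func main() {
	// Start the Stackdriver profiling agent once, before the workload runs.
	// Service and ServiceVersion are placeholders; on Compute Engine or
	// Kubernetes Engine, the project ID is detected from the environment.
	cfg := profiler.Config{
		Service:        "my-sample-service",
		ServiceVersion: "1.0.0",
	}
	if err := profiler.Start(cfg); err != nil {
		log.Fatalf("failed to start profiler: %v", err)
	}

	// ... application workload runs here; the agent collects and uploads
	// profiles periodically in the background ...
}
```

This is a configuration fragment rather than a runnable demo; the agent does all collection and uploading in the background once Start returns.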

In the Cloud Shell session that you have open, type the following:

go run main.go  

As mentioned earlier, the program is designed to increase the load on the CPU as it runs, and the Stackdriver profiler collects and saves the data periodically.

Your output will show the profile creation and upload messages.

You will see the Stackdriver Profiler dashboard change:

If you don't see any updates, click NOW on the right-hand side to refresh it with the latest profiled data.

Notice that over five profiles were updated in my demo.

These updates are also logged in the console.

Let's explore the profiler interface. The interface offers a control area for selecting the data to visualize and a flame-graph representation of the selected data. You can use the controls at the top to select a specific time range, so you can examine the data for that time frame.

Let's look at the options available to us in the profiler dashboard:

  • Service allows you to switch between the different applications being profiled.
  • Profile type lets you choose the kind of profile data to display; in this case, CPU.
  • Zone name lets you restrict the data to a particular zone, and Version restricts the profiled data to specific versions of your application.
  • Add profile data filter lets you filter out data or refine how the graph displays it.

Let’s explore the colorful flame graph:

The top frame (gray) represents the entire program and always shows 100% of the resource consumption. Below it, each frame represents a function, and its width (measured horizontally) shows the proportion of resource consumption that function is responsible for. The green frame is the Go runtime.main function, the orange frame is the main routine of the sample program, and the orange busyloop and load frames are routines called from the program's main function.

If you look closely, the four functions consume almost the same amount of resources. To understand where the remaining resources are allocated, we can use the filter option to hide the call stack of the main routine.

Type in Hide stacks: main, and hit Enter:

This shows all the resources consumed outside our program, which account for 0.211% over the six profiles we processed.

Try deploying different applications with the sample profiler code and continue to explore the behavior of your application with this powerful tool.
