This article is part of the Golang Notes series; see the index post for the complete list of articles.
Memory management is one of the most important parts of any programming language, arguably the most important. Go uses a garbage collector, like most high-level languages, so there is little the user must manage directly; still, understanding the mechanism helps you write better-performing Go programs and monitor a Go program's memory usage.
The article proceeds as follows: first, how Go allocates memory; then, how Go garbage-collects the memory it has allocated; finally, how to profile a running Go program's memory.
Admittedly, understanding how Go allocates memory is not a high-payoff endeavor: you can write and run Go programs perfectly well without it. But knowing the allocator offers guidance for certain kinds of applications, and it makes GC and profiling much easier to understand. (It is also good general learning; for instance, after reading Go's allocator I found its approach matches the anti-fragmentation design I studied in memcached long ago.)
Go's memory allocator was originally based on TCMalloc, but has since diverged with its own customizations. Go has its own goals, after all, and could not adopt TCMalloc wholesale. If you have the time, studying TCMalloc itself also helps in understanding Go's allocator.
// This was originally based on tcmalloc, but has diverged quite a bit.
// http://goog-perftools.sourceforge.net/doc/tcmalloc.html
Related analyses are listed in the resources section at the bottom for those interested.
Unlike the Node.js team, the Go maintainers are mediocre at organizing and publishing information: there are few high-quality official blog posts explaining the internals and related subsystems (it is either long wiki pages or thin slides plus videos). To dig into much of Go's low-level design, you have to read the official source code in depth. Memory is no exception; for reasons of time I mainly read existing blog posts. The two below explain things very well, conveniently one in Chinese and one in English:
Go 1.9.2
2018-06-06
Of the two posts above, start with the English one; reading only the English one is fine too. Also, because of version differences, the 512 GB maximum heap size mentioned in the Chinese post was removed in Go 1.11; see:
Below, the core points of memory allocation are excerpted; the blogs above have the long, detailed versions.
The limit has been removed; see the link above. In the last version that still had a cap, the heap maximum was 512 GB.
The memory blocks used for actual allocation; one Page is 8 KB.
In manually managed languages such as C/C++, variables requested with malloc or new are allocated on the heap. Go works differently, even though Go also has new: the Go compiler performs escape analysis to decide where a variable should be allocated.
package main

import ()

func foo() *int {
	var x int
	return &x
}

func bar() int {
	x := new(int)
	*x = 1
	return *x
}

func main() {}
$ go run -gcflags '-m -l' escape.go
./escape.go:6: moved to heap: x
./escape.go:7: &x escapes to heap
./escape.go:11: bar new(int) does not escape
x in foo() ends up allocated on the heap, while x in bar() ends up allocated on the stack.
References:
At startup, Go reserves one contiguous range of virtual address space, structured as the following regions:
- heap (arena): all memory that Go allocates from the heap lives in this region
- bitmap: records which addresses hold objects, and which addresses within objects contain pointers (serving the GC)
- spans: records which span each page (Page) belongs to
In short:
On the design goal of the Tiny allocator:
// The main targets of tiny allocator are small strings and
// standalone escaping variables. On a json benchmark
// the allocator reduces number of allocations by ~12% and
// reduces heap size by ~20%.
If the allocator created (occupied) and destroyed (freed) objects at arbitrary places in memory on demand, memory would end up heavily fragmented. The common anti-fragmentation approach (and the one Go uses) is to pre-divide memory into units by object size, then place each object into a unit matching its actual size. Picture a wall of bank safe-deposit boxes: you put items into drawers of fixed sizes. The upside is that large-scale fragmentation is prevented; the downside is inevitable waste (in the analogy, an item almost never fills its drawer exactly, so the wall of boxes is never 100% full).
Note that this anti-fragmentation design applies only to Small-class objects:
Go's allocator divides spans into size classes; browsing the source code gives a direct feel for them.
A few fields to note here:
An example:
class bytes/obj bytes/span objects tail waste max waste
...
3 32 8192 256 0 46.88%
4 48 8192 170 32 31.52%
...
P.S
In the source, class IDs run from 1 to 66, plus a class 0 span that holds Large-class objects, i.e. objects over 32 KB. In total there are 67 span classes.
This section title is my own invention; it feels apt enough.
Go's memory allocation is not controlled by a single central allocator but is managed in multiple tiers. Most importantly, following the GMP design, each P has its own allocation and cache manager. The benefit: when a P has cached spans available, the allocation is completely lock-free, so the performance cost is tiny. For this tiered caching concept, see the comment at the top of malloc.go.
Again, note that this machinery serves only Tiny (really a sub-case of Small) and Small objects:
// mheap: the malloc heap, managed at page (8192-byte) granularity.
// mspan: a run of pages managed by the mheap.
// mcentral: collects all spans of a given size class.
// mcache: a per-P cache of mspans with free space.
As mentioned earlier, the three size classes of objects — Tiny, Small, Large — behave differently when memory is requested for them.
Tiny object allocation is special: multiple Tiny objects are combined into a single object (think of it as composing a Small object) and placed into a class 2 span, then handled like a Small object. Per the official comments, the span holding Tiny objects currently defaults to size class 2, i.e. 16-byte object slots. This lets as many Tiny objects as possible be packed together, at the cost of some (potential) waste. An 8-byte slot would waste no memory at all but offer few combining opportunities; a 32-byte slot would offer more combining opportunities, but worst-case waste would be much larger.
For details, see the comments on the Tiny allocator in malloc.go: malloc.go
Allocation behavior:
- Pack the object into the current tiny block in this P's mcache.
- If it cannot be packed in: allocate a new tiny slot from the class 2 mspan in the mcache.
- If the mspan has no free slots: obtain a new mspan from the mcentral.
- If there is no mspan available: obtain a run of pages from the mheap.
- If the mheap does not have enough space: allocate a new group of pages from the OS.
Allocation behavior:
- Round the size up to a small size class and look at the corresponding mspan in this P's mcache; scan its free bitmap and allocate a free slot (entirely lock-free).
- If the mspan has no free slots: obtain a new mspan from the mcentral's list for that size class.
- If there is no mspan available: obtain a run of pages from the mheap.
- If the mheap does not have enough space: allocate a new group of pages (at least 1 MB) from the OS.
Large objects are allocated directly from the mheap, bypassing mcentral and mcache. If there is not enough space, memory is requested from the OS directly.
The knowledge behind garbage collection is actually quite simple: the field rests on a handful of seminal algorithm papers, and garbage collectors are generally implemented according to those papers and algorithms, with each language handling the specifics its own way. Once you understand GC in one or two languages, the rest is largely familiar territory.
I previously wrote about the Node.js side of this:
That write-up is fairly clear as well.
A good analysis of Go's GC is Golang's Real-time GC in Theory and Practice. For more GC detail, continue with Golang源码探索(三) GC的实现原理, reading from the section 回收对象的处理 onward (that article has no heading anchors, so there is no direct link; open the page and search for the heading).
In its recent versions Go's GC is a concurrent collector, and its goal is remarkably explicit: minimize pause time. The shorter the STW pauses, the more responsive the runtime is to the application, which markedly reduces the program's latency and increases its throughput.
Go's GC was not great in its early versions; reviewing the history gives a sense of the GC's capabilities and its core bottlenecks. The following reads cover that; a skim is enough, no need to be exacting.
After the theory come the practice and the key points that shape it; let's go through them.
GC is triggered under the following conditions:
Definition: mgc.go#gcTriggerHeap
// gcTriggerHeap indicates that a cycle should be started when
// the heap size reaches the trigger heap size computed by the
// controller.
gcTriggerHeap
The trigger test is: mgc.go#test()
// Non-atomic access to heap_live for performance. If
// we are going to trigger on this, this thread just
// atomically wrote heap_live anyway and we'll see our
// own write.
return memstats.heap_live >= memstats.gc_trigger
memstats will come up again under Profiling below; we won't expand on it here.
// heap_live is the number of bytes considered live by the GC.
// ...
heap_live uint64
Also worth mentioning is mstats.go#heap_marked, which is used below:
// heap_marked is the number of bytes marked by the previous
// GC. After mark termination, heap_live == heap_marked, but
// ...
heap_marked uint64
// gc_trigger is the heap size that triggers marking.
// ...
gc_trigger uint64
gc_trigger is a floating value that changes with the runtime's state; it is computed as follows: mgc.go#endCycle()
// Constant used to damp subsequent trigger adjustments
const triggerGain = 0.5
// Goal growth ratio; gcpercent comes from the GOGC environment variable.
// When GOGC is unset it defaults to 100, i.e. a goal growth ratio of 1.
// If GOGC=off, gcpercent is -1.
goalGrowthRatio := float64(gcpercent) / 100
// Actual growth ratio: (bytes currently live / bytes live at the last GC mark) - 1
actualGrowthRatio := float64(memstats.heap_live)/float64(memstats.heap_marked) - 1
// Duration of the GC mark phase (endCycle is called at Mark Termination)
assistDuration := nanotime() - c.markStartTime
// CPU utilization of the GC mark phase; the goal gcBackgroundUtilization is a constant 0.25
utilization := gcBackgroundUtilization
// assistTime is the total nanoseconds spent in mutator assists during this cycle
if assistDuration > 0 {
// Extra CPU utilization = total assist time / (mark duration * number of Ps)
utilization += float64(c.assistTime) / float64(assistDuration*int64(gomaxprocs))
}
// Trigger error = goal growth ratio - old trigger ratio
//                 - utilization / goal utilization * (actual growth ratio - old trigger ratio)
// Reading the parameters:
// the larger the actual growth ratio, the smaller the error; below 0, the next GC triggers earlier
// the larger the CPU utilization, the smaller the error; below 0, the next GC triggers earlier
// the larger the old trigger ratio, the smaller the error; below 0, the next GC triggers earlier
triggerError := goalGrowthRatio - memstats.triggerRatio - utilization/gcGoalUtilization*(actualGrowthRatio-memstats.triggerRatio)
// Adjust the trigger ratio by the error, applying only half of it each time (gradual adjustment)
triggerRatio := memstats.triggerRatio + triggerGain*triggerError
GOGC
The goal growth ratio in the formula comes from the environment variable GOGC. It defaults to 100 when unset; increasing it makes GC trigger less often. Setting GOGC=off disables GC.
The GOGC variable sets the initial garbage collection target percentage. A collection is triggered when the ratio of freshly allocated data to live data remaining after the previous collection reaches this percentage. The default is GOGC=100. Setting GOGC=off disables the garbage collector entirely. The runtime/debug package’s SetGCPercent function allows changing this percentage at run time. See https://golang.org/pkg/runtime/debug/#SetGCPercent.
Definition: mgc.go#gcTriggerTime
// gcTriggerTime indicates that a cycle should be started when
// it's been more than forcegcperiod nanoseconds since the
// previous GC cycle.
gcTriggerTime
The trigger test is: mgc.go#test()
lastgc := int64(atomic.Load64(&memstats.last_gc_nanotime))
// var forcegcperiod int64 = 2 * 60 * 1e9, i.e. two minutes
return lastgc != 0 && t.now-lastgc > forcegcperiod
If no GC has run within 2 minutes, a collection is forced.
A collection can also be triggered manually by calling runtime.GC(): mgc.go#GC().
Other GC-related details and concepts are covered in the Chinese blog post mentioned earlier and won't be expanded here; GC logic and its concepts are broadly similar across implementations.
Go's official profiling tools are remarkably rich, meeting developers' needs from many angles and levels. When organizing this topic, it pays to survey them all, so that later you can pick the right technique for each scenario.
The available profiling techniques/tools are:
Tool | Real-time | Use case | Intrusive | Scope | Notes |
---|---|---|---|---|---|
ReadMemStats | real-time | metrics monitoring | yes | point in time | Call the API to read memory statistics into an object; the statistics can then be used freely |
pprof | offline | post-hoc analysis | yes | point in time | - Call the API in code to write out a memory profile - Analyze the output later with the dedicated tool |
pprof web | real-time | live debugging | yes | point in time | Embed pprof's HTTP handlers in the code, then: - Fetch profile information from the corresponding endpoints in a browser - Or point the analysis tool at the HTTP endpoint for live results |
trace | offline | post-hoc analysis | yes | time range | - Start and stop trace in code to write out the data - Analyze the output later with the dedicated tool |
GODEBUG | real-time | live debugging | no | point in time | - Non-intrusive: just set the variable at program start - Information is printed to stderr in real time |
Notes:
P.S
A few recommended reads:
When calling runtime.ReadMemStats to obtain memory statistics, we get back a MemStats object; understanding this type is necessary.
In the source, two structs hold memory-related statistics.
// Statistics.
// If you edit this structure, also edit type MemStats below.
// Their layouts must match exactly.
//
// For detailed descriptions see the documentation for MemStats.
// Fields that differ from MemStats are further documented here.
//
// Many of these fields are updated on the fly, while others are only
// updated when updatememstats is called.
type mstats struct {
...
}
// A MemStats records statistics about the memory allocator.
type MemStats struct {
...
}
First-time readers of the code are often confused by the difference between memstats and MemStats (and fields like heap_live and heap_marked). Go's naming convention clears it up: capitalized identifiers are exported, lowercase ones are not, and the statistics API runtime.ReadMemStats returns a MemStats. So MemStats is the type through which external observers read the statistics, while memstats is the type the Go runtime uses internally. As a developer, reading the MemStats comments and understanding that type is generally all you need.
An annotated copy of this struct is included below; because it is long, it is placed in the resources section. For the complete comments and full understanding, consult the source: go/src/runtime/mstats.go#[email protected].
The more commonly used metrics are listed in the main text for quick reference:
Most of its fields match MemStats; the difference is that the fields after the following comment are not exported to MemStats:
// Statistics below here are not exported to MemStats directly.
...
If needed, read the fields and comments in go/src/runtime/mstats.go#[email protected] directly.
If you only need GC-related metrics, call debug.ReadGCStats (source); the struct you get back is go/src/runtime/debug/garbage.go#[email protected].
All of its metrics are also available in MemStats, so they are not repeated here; see above.
The test case and output analysis below are taken from: Profiling Go
Test case:
package main
import (
"log"
"os"
)
// bigBytes allocates 10 sets of 100 megabytes
func bigBytes() *[]byte {
s := make([]byte, 100000000)
return &s
}
func main() {
for i := 0; i < 10; i++ {
s := bigBytes()
if s == nil {
log.Println("oh noes")
}
}
}
**Note:** in versions before Go 1.9, calling runtime.ReadMemStats caused a fairly long STW pause, making it expensive to use; from 1.9 onward it is quite safe, with an STW of under 25µs. See:
Features:
Usage:
Features:
Usage:
Call pprof.WriteHeapProfile(os.Stdout) to capture a point-in-time snapshot of the memory state.
Build and run:
$ go build -o app && time ./app > memory.profile
Analyze the results:
$ go tool pprof memory.profile
Top-level overview of the analysis:
$ (pprof) top
Showing nodes accounting for 95.38MB, 100% of 95.38MB total
flat flat% sum% cum cum%
95.38MB 100% 100% 95.38MB 100% main.bigBytes /...ain.go (inline)
0 0% 100% 95.38MB 100% main.main /.../profiling/main.go
0 0% 100% 95.38MB 100% runtime.main /.../runtime/proc.go
Inspect a function in detail:
$ (pprof) list main.
Total: 95.38MB
ROUTINE ======================== main.bigBytes in /.../go/profiling/main.go
95.38MB 95.38MB (flat, cum) 100% of Total
. . 6: "runtime/pprof"
. . 7:)
. . 8:
. . 9:// bigBytes allocates 10 sets of 100 megabytes
. . 10:func bigBytes() *[]byte {
95.38MB 95.38MB 11: s := make([]byte, 100000000)
. . 12: return &s
. . 13:}
. . 14:
. . 15:func main() {
. . 16: for i := 0; i < 10; i++ {
ROUTINE ======================== main.main in /.../code/go/profiling/main.go
0 95.38MB (flat, cum) 100% of Total
. . 12: return &s
. . 13:}
. . 14:
. . 15:func main() {
. . 16: for i := 0; i < 10; i++ {
. 95.38MB 17: s := bigBytes()
. . 18: if s == nil {
. . 19: log.Println("oh noes")
. . 20: }
. . 21: }
. . 22:
Features:
Usage:
Import pprof's HTTP package and start a web server:
package main

import (
	"fmt"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// ...
	err := http.ListenAndServe("0.0.0.0:6060", nil)
	if err != nil {
		fmt.Println("Error:", err)
	}
	// ...
}
After building and running, visit http://127.0.0.1:6060/debug/pprof/ to see the output. Memory specifically is at: http://127.0.0.1:6060/debug/pprof/heap?debug=1.
If you already have a URL router in place, register the output endpoints yourself:
package main
import (
"net/http"
"net/http/pprof"
)
func message(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Hello World"))
}
func main() {
r := http.NewServeMux()
r.HandleFunc("/", message)
r.HandleFunc("/debug/pprof/", pprof.Index)
r.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
r.HandleFunc("/debug/pprof/profile", pprof.Profile)
r.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
r.HandleFunc("/debug/pprof/trace", pprof.Trace)
http.ListenAndServe(":8080", r)
}
Detailed analysis still happens in the pprof command-line tool; just point its input at the URL:
$ go tool pprof http://localhost:6060/debug/pprof/heap
Fetching profile over HTTP from http://localhost:6060/debug/pprof/heap
Saved profile in /Users/XXX/pprof/pprof.alloc_objects.alloc_space.inuse_objects.inuse_space.009.pb.gz
Type: inuse_space
Time: Mar 23, 2019 at 3:13pm (CST)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof)
Under the hood, the pprof tool downloads a snapshot of the heap from the given URL and then starts the interactive analysis:
Saved profile in /Users/XXX/pprof/pprof.alloc_objects.alloc_space.inuse_objects.inuse_space.009.pb.gz
Once inside the pprof interactive shell it is the usual routine, which we won't repeat here.
Using trace differs markedly from the tools above: a trace captures the program's behavior over a period of time, and its API makes no special distinction for memory. While trace is running, it collects information about the whole Go application: CPU, memory, goroutines, and so on.
Trace's typical role is to open up the investigation: "the application feels off, but I don't know where the problem is yet; let's trace it, dump all the data, and watch how the program's metrics change over time in the tool to hunt for the problem." Roughly that direction.
In practice this need is rare. Application monitoring usually exports metrics through APIs or aggregation tools and checks them over time, so when something goes wrong you generally already know which area it is in; and when you really need to chase details, you would reach for go tool pprof rather than trace. So trace's practical value is limited.
Features:
Usage:
trace.Start(os.Stdout)
defer trace.Stop()
Build, run, and write out the trace file:
$ go build -o app
$ time ./app > app.trace
$ go tool trace app.trace
If you want an output file that pprof can also consume, use the -pprof option. The manual, as usual, is the command-line help: go tool trace [email protected].
Parse the trace file:
$ go tool trace app.trace
2017/10/29 09:30:40 Parsing trace...
2017/10/29 09:30:40 Serializing trace...
2017/10/29 09:30:40 Splitting trace...
2017/10/29 09:30:40 Opening browser
The browser opens automatically at http://127.0.0.1:60331. The viewer's details are out of scope here; if interested, read: Profiling Go > Trace.
The official trace package is also worth a quick look: Package trace.
Set the environment variable GODEBUG=gctrace=1 when running a Go program to enable GC state output: a line is printed to stderr after every collection. This is commonly used for initial triage of GC problems, or when you suspect GC is consuming too much CPU.
Documented in the official runtime package: Environment Variables#gctrace.
gctrace: setting gctrace=1 causes the garbage collector to emit a single line to standard
error at each collection, summarizing the amount of memory collected and the
length of the pause. The format of this line is subject to change.
Currently, it is:
gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # P
where the fields are as follows:
gc # the GC number, incremented at each GC
@#s time in seconds since program start
#% percentage of time spent in GC since program start
#+...+# wall-clock/CPU times for the phases of the GC
#->#-># MB heap size at GC start, at GC end, and live heap
# MB goal goal heap size
# P number of processors used
The phases are stop-the-world (STW) sweep termination, concurrent
mark and scan, and STW mark termination. The CPU times
for mark/scan are broken down in to assist time (GC performed in
line with allocation), background GC time, and idle GC time.
If the line ends with "(forced)", this GC was forced by a
runtime.GC() call.
Setting gctrace to any value > 0 also causes the garbage collector
to emit a summary when memory is released back to the system.
This process of returning memory to the system is called scavenging.
The format of this summary is subject to change.
Currently it is:
scvg#: # MB released printed only if non-zero
scvg#: inuse: # idle: # sys: # released: # consumed: # (MB)
where the fields are as follows:
scvg# the scavenge cycle number, incremented at each scavenge
inuse: # MB used or partially used spans
idle: # MB spans pending scavenging
sys: # MB mapped from the system
released: # MB released to the system
consumed: # MB allocated from the system
For example:
gc 31 @1656.509s 0%: 0.010+0.26+0.006 ms clock, 0.085+0/0.33/0.64+0.048 ms cpu, 49->49->18 MB, 50 MB goal, 8 P
- `gc 31`: the 31st GC
- `@1656.509s`: 1656.509 seconds since program start
- `0%`: percentage of CPU time spent in GC since program start
- `0.010+0.26+0.006 ms clock`: wall-clock time of the STW sweep termination, concurrent mark/scan, and STW mark termination phases, in milliseconds (ms)
- `0.085+0/0.33/0.64+0.048 ms cpu`: CPU time, with mark/scan broken down into assist GC, background GC, and idle GC, in milliseconds (ms)
- `49->49->18 MB`: heap size at GC start, at GC end, and live heap
- `50 MB goal`: goal heap size for the next GC
- `8 P`: number of processors used by the program

scvg52: inuse: 19, idle: 43, sys: 63, released: 4, consumed: 59 (MB)

- `scvg52`: the 52nd scavenge cycle
- `inuse: 19`: MB of used or partially used spans
- `idle: 43`: MB of spans pending scavenging
- `sys: 63`: MB mapped from the system
- `released: 4`: MB released back to the system
- `consumed: 59 (MB)`: MB allocated from the system

A complete output sample: GODEBUG=gctrace=1 sample
pprof usage has two modes: running go tool pprof ... to emit analysis results directly, and the interactive (pprof) ... shell. Generally, for a quick analysis of a small program the first mode suffices: export everything as a graph, a PDF, or similar, and read it directly. For more complex programs the second mode is recommended: enter the interactive shell and drill down, analysis by analysis, as your investigation requires.
A note on where the help lives: Go's documentation is, famously, not great, so using both modes relies mostly on the help text the commands print themselves.
Running go tool pprof without any arguments prints its manual: go tool pprof [email protected]. Inside the interactive shell, help prints the interactive commands' manual: (pprof) [email protected]. The "interactive commands" are what you run at the (pprof) ... prompt that appears after executing go tool pprof ...; there you run a series of pprof subcommands for memory and CPU analysis.
Commonly used commands:
The help for list reads:
(pprof) help list
Output annotated source for functions matching regexp
Usage:
list<func_regex|address> [-focus_regex]* [-ignore_regex]*
Include functions matching func_regex, or including the address specified.
Include samples matching focus_regex, and exclude ignore_regex.
For the full command list, see: (pprof) [email protected].
When analyzing with go tool pprof ..., the default memory view is inuse_space, i.e. the memory currently in use (occupied). Sometimes you need to analyze from other views, which requires either:
go tool pprof -alloc_space ...
(pprof) sample_index = alloc_space
The available view options are:
Legacy convenience options:
-inuse_space Same as -sample_index=inuse_space
-inuse_objects Same as -sample_index=inuse_objects
-alloc_space Same as -sample_index=alloc_space
-alloc_objects Same as -sample_index=alloc_objects
-total_delay Same as -sample_index=delay
-contentions Same as -sample_index=contentions
-mean_delay Same as -mean -sample_index=delay
Help for the sample_index command:
(pprof) help sample_index
Sample value to report (0-based index or name)
Profiles contain multiple values per sample.
Use sample_index=i to select the ith value (starting at 0).
Or use sample_index=name, with name in [alloc_objects alloc_space inuse_objects inuse_space].
For the complete list and other options, see: go tool pprof [email protected] and (pprof) [email protected].
As noted above, if the view you chose has no samples to analyze, pprof will tell you: No samples were found with the default sample value type.
$ go tool pprof http://127.0.0.1:8080/debug/pprof/heap
Fetching profile over HTTP from http://127.0.0.1:8080/debug/pprof/heap
Saved profile in /Users/XXX/pprof/pprof.alloc_objects.alloc_space.inuse_objects.inuse_space.005.pb.gz
Type: inuse_space
Time: Mar 22, 2019 at 4:43pm (CST)
No samples were found with the default sample value type.
Try "sample_index" command to analyze different sample values.
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) sample_index = alloc_space
(pprof) top5
Showing nodes accounting for 6.35GB, 100% of 6.35GB total
flat flat% sum% cum cum%
6.35GB 100% 100% 6.35GB 100% main.channelFactory.func1
References:
Use a command like the following to export the analysis of a profile file or URL as a static asset you can open directly:
go tool pprof -png http://localhost:6060/debug/pprof/heap > data.png
Available formats include -png, -gif, -svg, and so on, even -pdf. For all options, see: go tool pprof [email protected].
The default output is the inuse_space analysis; this can be changed, see: sample_index.
The larger a box in the graph, the more memory it consumes.
Starting with Go 1.10, pprof can also present its analysis in a web UI, convenient for interaction and exploration. The (pprof) ... interactive shell requires familiarity with its commands, and static exports handle complex programs poorly; for interactivity, the web UI is arguably the best option.
Usage:
pprof -http=:8899 http://localhost:8080/debug/pprof/heap
Fetching profile over HTTP from http://localhost:8080/debug/pprof/heap
Saved profile in /Users/XXX/pprof/pprof.alloc_objects.alloc_space.inuse_objects.inuse_space.008.pb.gz
This starts and keeps alive a web process; open http://localhost:8899/ui/ in a browser to interact with and inspect the profile.
References:
The profile files pprof writes are protobuf binaries; the schema is described in pprof/proto/profile.proto. Worth a read if you want a deeper understanding of what a profile stores.
The code lives on GitHub: experiment/memory/memory.go. The GitHub copy receives the latest updates, though opening another page is slightly more hassle. A gist copy is embedded on this page for convenience: experiment/memory/memory.go, but it may lag behind.
Usage:
- `-stats=true`: run the memory-leak script, printing runtime.MemStats information on the command line
- `GODEBUG=gctrace=1 go run memory.go <args>…`: run the memory-leak script, printing GC statistics on the command line

Parameter notes:
- takes effect under FIXED_SPAN

runtime.MemStats information
的信息// A MemStats records statistics about the memory allocator.
type MemStats struct {
// 常规统计。
// Alloc 是已分配的堆内存对象占用的内存量(bytes)。
//
// 这个值和 HeapAlloc 一致(看下面)。
Alloc uint64
// TotalAlloc 是累积的堆内存对象分配的内存量(bytes)。
//
// TotalAlloc 会随着堆内存对象分配慢慢增长,但不像 Alloc 和 HeapAlloc,
// 这个值不会随着对象被释放而缩小。
TotalAlloc uint64
// Sys 是从 OS 获得的内存总量(bytes)。
//
// Sys 是下面列出的 XSys 字段的综合。Sys 维护着为 Go 运行时预留的虚拟内存空间地址,
// 里面包含了:堆、栈,以及其他内部数据结构。
Sys uint64
// Lookups 是 runtime 执行的指针查询的数量。
//
// 这主要在针对 runtime 内部进行 debugging 的时候比较有用。
Lookups uint64
// Mallocs 是累积被分配的堆内存对象数量。
// 存活堆内存对象数量是 Mallocs - Frees。
Mallocs uint64
// Frees 是累积被释放掉的堆内存对象数量。
Frees uint64
// 堆内存统计。
//
// 理解堆内存统计需要一些 Go 是如何管理内存的知识。Go 将堆内存虚拟内存空间以 "spans" 为单位进行分割。
// spans 是 8K(或更大)的连续内存空间。一个 span 可能会在以下三种状态之一:
//
// 一个 "空闲 idle" 的 span 内部不含任何对象或其他数据。
// 占用物理内存空间的空闲状态 span 可以被释放回 OS(但虚拟内存空间不会),
// 或者也可以被转化成为 "使用中 in use" 或 "堆栈 stack" 状态。
//
// 一个 "使用中 in use" span 包含了至少一个堆内存对象且可能还有富余的空间可以分配更多的堆内存对象。
//
// 一个 "堆栈 stack" span 是被用作 goroutine stack 的 内存空间。
// 堆栈状态的 span 不被视作是堆内存的一部分。一个 span 可以在堆内存和栈内存之间切换;
// 但不可能同时作为两者。
// HeapAlloc 是已分配的堆内存对象占用的内存量(bytes)。
//
// "已分配"的堆内存对象包含了所有可达的对象,以及所有垃圾回收器已知但仍未回收的不可达对象。
// 确切的说,HeapAlloc 随着堆内存对象分配而增长,并随着内存清理、不可达对象的释放而缩小。
// 清理会随着 GC 循环渐进发生,所有增长和缩小这两个情况是同时存在的,
// 作为结果 HeapAlloc 的变动趋势是平滑的(与传统的 stop-the-world 型垃圾回收器的锯齿状趋势成对比)。
HeapAlloc uint64
// HeapSys 是堆内存从 OS 获得的内存总量(bytes)。
//
// HeapSys 维护着为堆内存而保留的虚拟内存空间。这包括被保留但尚未使用的虚拟内存空间,
// 这部分是不占用实际物理内存的,但趋向于缩小,
// 和那些占用物理内存但后续因不再使用而释放回 OS 的虚拟内存空间一样。(查看 HeapReleased 作为校对)
//
// HeapSys 用来评估堆内存曾经到过的最大尺寸。
HeapSys uint64
// HeapIdle 是处于"空闲状态(未使用)"的 spans 占用的内存总量(bytes)。
//
// 空闲状态的 spans 内部不含对象。这些 spans 可以(并可能已经被)释放回 OS,
// 或者它们可以在堆内存分配中重新被利用起来,或者也可以被重新作为栈内存利用起来。
//
// HeapIdle 减去 HeapReleased 用来评估可以被释放回 OS 的内存总量,
// 但因为这些内存已经被 runtime 占用了(已经从 OS 申请下来了)所以堆内存可以重新使用这些内存,
// 就不用再向 OS 申请更多内存了。如果这个差值显著大于堆内存尺寸,这意味着近期堆内存存活对象数量存在一个短时峰值。
HeapIdle uint64
// HeapInuse 是处于"使用中"状态的 spans 占用的内存总量(bytes)。
//
// 使用中的 spans 内部存在至少一个对象。这些 spans 仅可以被用来存储其他尺寸接近的对象。
//
// HeapInuse 减去 HeapAlloc 用来评估被用来存储特定尺寸对象的内存空间的总量,
// 但目前并没有被使用。这是内存碎片的上界,但通常来说这些内存会被高效重用。
HeapInuse uint64
// HeapReleased 是被释放回 OS 的物理内存总量(bytes)。
//
// 这个值计算为已经被释放回 OS 的空闲状态的 spans 堆内存空间,且尚未重新被堆内存分配。
HeapReleased uint64
// HeapObjects 是堆内存中的对象总量。
//
// 和 HeapAlloc 一样,这个值随着对象分配而上涨,随着堆内存清理不可达对象而缩小。
HeapObjects uint64
// 栈内存统计。
//
// 栈内存不被认为是堆内存的一部分,但 runtime 会将一个堆内存中的 span 用作为栈内存,反之亦然。
// StackInuse 是栈内存使用的 spans 占用的内存总量(bytes)。
//
// 使用中状态的栈内存 spans 其中至少有一个栈内存。这些 spans 只能被用来存储其他尺寸接近的栈内存。
//
// 并不存在 StackIdle,因为未使用的栈内存 spans 会被释放回堆内存(因此被计入 HeapIdle)。
StackInuse uint64
// StackSys 是栈内存从 OS 获得的内存总量(bytes)。
//
// StackSys 是 StackInuse 加上一些为了 OS 线程栈而直接从 OS 获取的内存(应该很小)。
StackSys uint64
// 堆外(off-heap)内存统计。
//
// 下列的统计信息描述了并不会从堆内存进行分配的运行时内部(runtime-internal)结构体(通常因为它们是堆内存实现的一部分)。
// 不像堆内存或栈内存,任何这些结构体的内存分配仅只是为这些结构服务。
//
// 这些统计信息对 debugging runtime 内存额外开销非常有用。
// MSpanInuse 是 mspan 结构体分配的内存量(bytes)。
MSpanInuse uint64
// MSpanSys 是为 mspan 结构体从 OS 申请过来的内存量(bytes)。
MSpanSys uint64
// MCacheInuse 是 mcache 结构体分配的内存量(bytes)。
MCacheInuse uint64
// MCacheSys 是为 mcache 结构体从 OS 申请过来的内存量(bytes)。
MCacheSys uint64
// BuckHashSys 是用来 profiling bucket hash tables 的内存量(bytes)。
BuckHashSys uint64
// GCSys 是在垃圾回收中使用的 metadata 的内存量(bytes)。
GCSys uint64
// OtherSys 是各种各样的 runtime 分配的堆外内存量(bytes)。
OtherSys uint64
// 垃圾回收统计。
// NextGC 是下一次 GC 循环的目标堆内存尺寸。
//
// 垃圾回收器的目标是保持 HeapAlloc ≤ NextGC。
// 在每一轮 GC 循环末尾,下一次循环的目标值会基于当前可达对象数据量以及 GOGC 的值来进行计算。
NextGC uint64
// LastGC 是上一次垃圾回收完成的时间,其值为自 1970 年纸巾的 nanoseconds(UNIX epoch)。
LastGC uint64
// PauseTotalNs 是自程序启动开始,在 GC stop-the-world 中暂停的累积时长,以 nanoseconds 计数。
//
// 在一次 stop-the-world 暂停期间,所有的 goroutines 都会被暂停,仅垃圾回收器在运行。
PauseTotalNs uint64
// PauseNs 是最近的 GC stop-the-world 暂停耗时的环形缓冲区(以 nanoseconds 计数)。
//
// 最近一次的暂停耗时在 PauseNs[(NumGC+255)%256] 这个位置。
// 通常来说,PauseNs[N%256] 记录着最近第 N%256th 次 GC 循环的暂停耗时。
// 在每次 GC 循环中可能会有多次暂停;这是在一次循环中的所有暂停时长的总合。
PauseNs [256]uint64
// PauseEnd 是最近的 GC 暂停结束时间的环形缓冲区,其值为自 1970 年纸巾的 nanoseconds(UNIX epoch)。
//
// 这个缓冲区的填充方式和 PauseNs 是一致的。
// 每次 GC 循环可能有多次暂停;这个缓冲区记录的是每个循环的最后一次暂停的结束时间。
PauseEnd [256]uint64
// NumGC 是完成过的 GC 循环的数量。
NumGC uint32
// NumForcedGC 是应用程序经由调用 GC 函数来强制发起的 GC 循环的数量。
NumForcedGC uint32
// GCCPUFraction 是自程序启动以来,应用程序的可用 CPU 时间被 GC 消耗的时长部分。
//
// GCCPUFraction 是一个 0 和 1 之间的数字,0 代表 GC 并没有消耗该应用程序的任何 CPU。
// 一个应用程序的可用 CPU 时间定义为:自应用程序启动以来 GOMAXPROCS 的积分。
// 举例来说,如果 GOMAXPROCS 是 2 且应用程序已经运行了 10 秒,那么"可用 CPU 时长"就是 20 秒。
// GCCPUFraction 并未包含写屏障行为消耗的 CPU 时长。
//
// 该值和经由 GODEBUG=gctrace=1 报告出来的 CPU 时长是一致的。
GCCPUFraction float64
// EnableGC 显示 GC 是否被启用了。该值永远为真,即便 GOGC=off 被启用。
EnableGC bool
// DebugGC 目前并未被使用。
DebugGC bool
// BySize 汇报了按大小划分的 span 级别内存分配统计信息。
//
// BySize[N] 给出了尺寸 S 对象的内存分配统计信息,尺寸大小是:
// BySize[N-1].Size < S ≤ BySize[N].Size。
//
// 这个结构里的数据并未汇报尺寸大于 BySize[60].Size 的内存分配数据。
BySize [61]struct {
// Size 是当前尺寸级别可容纳的最大对象的 byte 大小。
Size uint32
// Mallocs 是分配到这个尺寸级别的堆内存对象的累积数量。
// 累积分配的内存容量(bytes)可用:Size*Mallocs 进行计算。
// 当前尺寸级别内存活的对象数量可以用 Mallocs - Frees 进行计算。
Mallocs uint64
// Frees 是当前尺寸级别累积释放的堆内存对象的数量。
Frees uint64
}
}
// Copyright 2014 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Memory allocator.
//
// This was originally based on tcmalloc, but has diverged quite a bit.
// http://goog-perftools.sourceforge.net/doc/tcmalloc.html
// The main allocator works in runs of pages.
// Small allocation sizes (up to and including 32 kB) are
// rounded to one of about 70 size classes, each of which
// has its own free set of objects of exactly that size.
// Any free page of memory can be split into a set of objects
// of one size class, which are then managed using a free bitmap.
//
// The allocator's data structures are:
//
// fixalloc: a free-list allocator for fixed-size off-heap objects,
// used to manage storage used by the allocator.
// mheap: the malloc heap, managed at page (8192-byte) granularity.
// mspan: a run of pages managed by the mheap.
// mcentral: collects all spans of a given size class.
// mcache: a per-P cache of mspans with free space.
// mstats: allocation statistics.
//
// Allocating a small object proceeds up a hierarchy of caches:
//
// 1. Round the size up to one of the small size classes
// and look in the corresponding mspan in this P's mcache.
// Scan the mspan's free bitmap to find a free slot.
// If there is a free slot, allocate it.
// This can all be done without acquiring a lock.
//
// 2. If the mspan has no free slots, obtain a new mspan
// from the mcentral's list of mspans of the required size
// class that have free space.
// Obtaining a whole span amortizes the cost of locking
// the mcentral.
//
// 3. If the mcentral's mspan list is empty, obtain a run
// of pages from the mheap to use for the mspan.
//
// 4. If the mheap is empty or has no page runs large enough,
// allocate a new group of pages (at least 1MB) from the
// operating system. Allocating a large run of pages
// amortizes the cost of talking to the operating system.
//
// Sweeping an mspan and freeing objects on it proceeds up a similar
// hierarchy:
//
// 1. If the mspan is being swept in response to allocation, it
// is returned to the mcache to satisfy the allocation.
//
// 2. Otherwise, if the mspan still has allocated objects in it,
// it is placed on the mcentral free list for the mspan's size
// class.
//
// 3. Otherwise, if all objects in the mspan are free, the mspan
// is now "idle", so it is returned to the mheap and no longer
// has a size class.
// This may coalesce it with adjacent idle mspans.
//
// 4. If an mspan remains idle for long enough, return its pages
// to the operating system.
//
// Allocating and freeing a large object uses the mheap
// directly, bypassing the mcache and mcentral.
//
// Free object slots in an mspan are zeroed only if mspan.needzero is
// false. If needzero is true, objects are zeroed as they are
// allocated. There are various benefits to delaying zeroing this way:
//
// 1. Stack frame allocation can avoid zeroing altogether.
//
// 2. It exhibits better temporal locality, since the program is
// probably about to write to the memory.
//
// 3. We don't zero pages that never get reused.
// Virtual memory layout
//
// The heap consists of a set of arenas, which are 64MB on 64-bit and
// 4MB on 32-bit (heapArenaBytes). Each arena's start address is also
// aligned to the arena size.
//
// Each arena has an associated heapArena object that stores the
// metadata for that arena: the heap bitmap for all words in the arena
// and the span map for all pages in the arena. heapArena objects are
// themselves allocated off-heap.
//
// Since arenas are aligned, the address space can be viewed as a
// series of arena frames. The arena map (mheap_.arenas) maps from
// arena frame number to *heapArena, or nil for parts of the address
// space not backed by the Go heap. The arena map is structured as a
// two-level array consisting of a "L1" arena map and many "L2" arena
// maps; however, since arenas are large, on many architectures, the
// arena map consists of a single, large L2 map.
//
// The arena map covers the entire possible address space, allowing
// the Go heap to use any part of the address space. The allocator
// attempts to keep arenas contiguous so that large spans (and hence
// large objects) can cross arenas.
$ go tool pprof
usage:
Produce output in the specified format.
pprof <format> [options] [binary] <source> ...
Omit the format to get an interactive shell whose commands can be used
to generate various views of a profile
pprof [options] [binary] <source> ...
Omit the format and provide the "-http" flag to get an interactive web
interface at the specified host:port that can be used to navigate through
various views of a profile.
pprof -http [host]:[port] [options] [binary] <source> ...
Details:
Output formats (select at most one):
-callgrind Outputs a graph in callgrind format
-comments Output all profile comments
-disasm Output assembly listings annotated with samples
-dot Outputs a graph in DOT format
-eog Visualize graph through eog
-evince Visualize graph through evince
-gif Outputs a graph image in GIF format
-gv Visualize graph through gv
-kcachegrind Visualize report in KCachegrind
-list Output annotated source for functions matching regexp
-pdf Outputs a graph in PDF format
-peek Output callers/callees of functions matching regexp
-png Outputs a graph image in PNG format
-proto Outputs the profile in compressed protobuf format
-ps Outputs a graph in PS format
-raw Outputs a text representation of the raw profile
-svg Outputs a graph in SVG format
-tags Outputs all tags in the profile
-text Outputs top entries in text form
-top Outputs top entries in text form
-topproto Outputs top entries in compressed protobuf format
-traces Outputs all profile samples in text form
-tree Outputs a text rendering of call graph
-web Visualize graph through web browser
-weblist Display annotated source in a web browser
Options:
-call_tree Create a context-sensitive call tree
-compact_labels Show minimal headers
-divide_by Ratio to divide all samples before visualization
-drop_negative Ignore negative differences
-edgefraction Hide edges below <f>*total
-focus Restricts to samples going through a node matching regexp
-hide Skips nodes matching regexp
-ignore Skips paths going through any nodes matching regexp
-mean Average sample value over first value (count)
-nodecount Max number of nodes to show
-nodefraction Hide nodes below <f>*total
-noinlines Ignore inlines.
-normalize Scales profile based on the base profile.
-output Output filename for file-based outputs
-prune_from Drops any functions below the matched frame.
-relative_percentages Show percentages relative to focused subgraph
-sample_index Sample value to report (0-based index or name)
-show Only show nodes matching regexp
-show_from Drops functions above the highest matched frame.
-source_path Search path for source files
-tagfocus Restricts to samples with tags in range or matched by regexp
-taghide Skip tags matching this regexp
-tagignore Discard samples with tags in range or matched by regexp
-tagshow Only consider tags matching this regexp
-trim Honor nodefraction/edgefraction/nodecount defaults
-trim_path Path to trim from source paths before search
-unit Measurement units to display
Option groups (only set one per group):
cumulative
-cum Sort entries based on cumulative weight
-flat Sort entries based on own weight
granularity
-addresses Aggregate at the address level.
-filefunctions Aggregate at the function level.
-files Aggregate at the file level.
-functions Aggregate at the function level.
-lines Aggregate at the source code line level.
Source options:
-seconds Duration for time-based profile collection
-timeout Timeout in seconds for profile collection
-buildid Override build id for main binary
-add_comment Free-form annotation to add to the profile
Displayed on some reports or with pprof -comments
-diff_base source Source of base profile for comparison
-base source Source of base profile for profile subtraction
profile.pb.gz Profile in compressed protobuf format
legacy_profile Profile in legacy pprof format
http://host/profile URL for profile handler to retrieve
-symbolize= Controls source of symbol information
none Do not attempt symbolization
local Examine only local binaries
fastlocal Only get function names from local binaries
remote Do not examine local binaries
force Force re-symbolization
Binary Local path or build id of binary for symbolization
-tls_cert TLS client certificate file for fetching profile and symbols
-tls_key TLS private key file for fetching profile and symbols
-tls_ca TLS CA certs file for fetching profile and symbols
Misc options:
-http Provide web based interface at host:port.
Host is optional and 'localhost' by default.
Port is optional and a randomly available port by default.
-tools Search path for object tools
Legacy convenience options:
-inuse_space Same as -sample_index=inuse_space
-inuse_objects Same as -sample_index=inuse_objects
-alloc_space Same as -sample_index=alloc_space
-alloc_objects Same as -sample_index=alloc_objects
-total_delay Same as -sample_index=delay
-contentions Same as -sample_index=contentions
-mean_delay Same as -mean -sample_index=delay
Environment Variables:
PPROF_TMPDIR Location for saved profiles (default $HOME/pprof)
PPROF_TOOLS Search path for object-level tools
PPROF_BINARY_PATH Search path for local binary files
default: $HOME/pprof/binaries
searches $name, $path, $buildid/$name, $path/$buildid
* On Windows, %USERPROFILE% is used instead of $HOME
(pprof) help
Commands:
callgrind Outputs a graph in callgrind format
comments Output all profile comments
disasm Output assembly listings annotated with samples
dot Outputs a graph in DOT format
eog Visualize graph through eog
evince Visualize graph through evince
gif Outputs a graph image in GIF format
gv Visualize graph through gv
kcachegrind Visualize report in KCachegrind
list Output annotated source for functions matching regexp
pdf Outputs a graph in PDF format
peek Output callers/callees of functions matching regexp
png Outputs a graph image in PNG format
proto Outputs the profile in compressed protobuf format
ps Outputs a graph in PS format
raw Outputs a text representation of the raw profile
svg Outputs a graph in SVG format
tags Outputs all tags in the profile
text Outputs top entries in text form
top Outputs top entries in text form
topproto Outputs top entries in compressed protobuf format
traces Outputs all profile samples in text form
tree Outputs a text rendering of call graph
web Visualize graph through web browser
weblist Display annotated source in a web browser
o/options List options and their current values
quit/exit/^D Exit pprof
Options:
call_tree Create a context-sensitive call tree
compact_labels Show minimal headers
divide_by Ratio to divide all samples before visualization
drop_negative Ignore negative differences
edgefraction Hide edges below <f>*total
focus Restricts to samples going through a node matching regexp
hide Skips nodes matching regexp
ignore Skips paths going through any nodes matching regexp
mean Average sample value over first value (count)
nodecount Max number of nodes to show
nodefraction Hide nodes below <f>*total
noinlines Ignore inlines.
normalize Scales profile based on the base profile.
output Output filename for file-based outputs
prune_from Drops any functions below the matched frame.
relative_percentages Show percentages relative to focused subgraph
sample_index Sample value to report (0-based index or name)
show Only show nodes matching regexp
show_from Drops functions above the highest matched frame.
source_path Search path for source files
tagfocus Restricts to samples with tags in range or matched by regexp
taghide Skip tags matching this regexp
tagignore Discard samples with tags in range or matched by regexp
tagshow Only consider tags matching this regexp
trim Honor nodefraction/edgefraction/nodecount defaults
trim_path Path to trim from source paths before search
unit Measurement units to display
Option groups (only set one per group):
cumulative
cum Sort entries based on cumulative weight
flat Sort entries based on own weight
granularity
addresses Aggregate at the address level.
filefunctions Aggregate at the function level.
files Aggregate at the file level.
functions Aggregate at the function level.
lines Aggregate at the source code line level.
: Clear focus/ignore/hide/tagfocus/tagignore
type "help <cmd|option>" for more information
$ go tool trace
Usage of 'go tool trace':
Given a trace file produced by 'go test':
go test -trace=trace.out pkg
Open a web browser displaying trace:
go tool trace [flags] [pkg.test] trace.out
Generate a pprof-like profile from the trace:
go tool trace -pprof=TYPE [pkg.test] trace.out
[pkg.test] argument is required for traces produced by Go 1.6 and below.
Go 1.7 does not require the binary argument.
Supported profile types are:
- net: network blocking profile
- sync: synchronization blocking profile
- syscall: syscall blocking profile
- sched: scheduler latency profile
Flags:
-http=addr: HTTP service address (e.g., ':6060')
-pprof=type: print a pprof-like profile instead
-d: print debug info such as parsed events
Note that while the various profiles available when launching
'go tool trace' work on every browser, the trace viewer itself
(the 'view trace' page) comes from the Chrome/Chromium project
and is only actively tested on that browser.
$ GODEBUG=gctrace=1 ./memory
gc 1 @11.703s 0%: 0.009+0.21+0.004 ms clock, 0.072+0/0.21/0.27+0.033 ms cpu, 4->4->3 MB, 5 MB goal, 8 P
gc 2 @23.103s 0%: 0.008+0.19+0.005 ms clock, 0.069+0/0.22/0.35+0.043 ms cpu, 7->7->7 MB, 8 MB goal, 8 P
gc 3 @45.707s 0%: 0.022+0.35+0.005 ms clock, 0.18+0/0.46/0.35+0.047 ms cpu, 14->14->14 MB, 15 MB goal, 8 P
gc 4 @89.806s 0%: 0.010+0.42+0.006 ms clock, 0.086+0/0.47/0.68+0.051 ms cpu, 28->28->28 MB, 29 MB goal, 8 P
scvg0: inuse: 47, idle: 15, sys: 63, released: 0, consumed: 63 (MB)
gc 5 @175.704s 0%: 0.010+0.34+0.005 ms clock, 0.087+0/0.44/0.85+0.047 ms cpu, 55->55->23 MB, 56 MB goal, 8 P
gc 6 @248.402s 0%: 0.010+0.41+0.008 ms clock, 0.086+0/0.44/0.99+0.065 ms cpu, 46->46->15 MB, 47 MB goal, 8 P
gc 7 @295.302s 0%: 0.010+0.20+0.004 ms clock, 0.080+0/0.25/0.55+0.035 ms cpu, 30->30->30 MB, 31 MB goal, 8 P
scvg1: inuse: 32, idle: 31, sys: 63, released: 0, consumed: 63 (MB)
gc 8 @386.702s 0%: 0.011+0.33+0.007 ms clock, 0.088+0/0.46/0.70+0.059 ms cpu, 58->58->27 MB, 60 MB goal, 8 P
scvg2: inuse: 47, idle: 15, sys: 63, released: 0, consumed: 63 (MB)
gc 9 @470.005s 0%: 0.011+0.27+0.006 ms clock, 0.093+0/0.25/0.75+0.050 ms cpu, 53->53->22 MB, 54 MB goal, 8 P
gc 10 @537.503s 0%: 0.010+0.28+0.009 ms clock, 0.087+0/0.27/0.58+0.076 ms cpu, 43->43->12 MB, 44 MB goal, 8 P
gc 11 @574.301s 0%: 0.011+0.27+0.006 ms clock, 0.089+0/0.31/0.74+0.049 ms cpu, 23->23->23 MB, 24 MB goal, 8 P
scvg3: inuse: 32, idle: 31, sys: 63, released: 0, consumed: 63 (MB)
gc 12 @646.001s 0%: 0.006+0.11+0.003 ms clock, 0.052+0/0.14/0.39+0.030 ms cpu, 46->46->14 MB, 47 MB goal, 8 P
gc 13 @691.005s 0%: 0.016+0.26+0.006 ms clock, 0.13+0/0.33/0.36+0.050 ms cpu, 28->28->28 MB, 29 MB goal, 8 P
scvg4: inuse: 48, idle: 15, sys: 63, released: 0, consumed: 63 (MB)
gc 14 @778.602s 0%: 0.021+0.26+0.008 ms clock, 0.17+0/0.34/0.80+0.067 ms cpu, 56->56->25 MB, 57 MB goal, 8 P
gc 15 @854.601s 0%: 0.013+0.29+0.008 ms clock, 0.11+0/0.34/0.77+0.064 ms cpu, 48->48->17 MB, 50 MB goal, 8 P
scvg5: inuse: 32, idle: 31, sys: 63, released: 0, consumed: 63 (MB)
gc 16 @907.902s 0%: 0.011+0.22+0.006 ms clock, 0.093+0/0.23/0.48+0.049 ms cpu, 34->34->2 MB, 35 MB goal, 8 P
gc 17 @916.902s 0%: 0.016+0.24+0.005 ms clock, 0.12+0/0.25/0.48+0.046 ms cpu, 5->5->5 MB, 6 MB goal, 8 P
gc 18 @934.507s 0%: 0.034+0.33+0.005 ms clock, 0.27+0/0.41/0.75+0.047 ms cpu, 11->11->11 MB, 12 MB goal, 8 P
gc 19 @968.802s 0%: 0.017+0.24+0.007 ms clock, 0.14+0/0.30/0.60+0.063 ms cpu, 22->22->22 MB, 23 MB goal, 8 P
gc 20 @1035.706s 0%: 0.071+0.25+0.006 ms clock, 0.57+0/0.33/0.61+0.049 ms cpu, 42->42->11 MB, 44 MB goal, 8 P
scvg6: inuse: 16, idle: 46, sys: 63, released: 0, consumed: 63 (MB)
gc 21 @1071.302s 0%: 0.011+0.24+0.006 ms clock, 0.089+0/0.28/0.72+0.048 ms cpu, 22->22->22 MB, 23 MB goal, 8 P
gc 22 @1140.611s 0%: 0.045+0.25+0.006 ms clock, 0.36+0/0.25/0.70+0.052 ms cpu, 44->44->13 MB, 45 MB goal, 8 P
gc 23 @1180.905s 0%: 0.034+0.34+0.006 ms clock, 0.27+0/0.32/0.72+0.051 ms cpu, 25->25->25 MB, 26 MB goal, 8 P
scvg7: inuse: 32, idle: 30, sys: 63, released: 0, consumed: 63 (MB)
gc 24 @1259.503s 0%: 0.018+0.24+0.008 ms clock, 0.14+0/0.32/0.51+0.067 ms cpu, 50->50->19 MB, 51 MB goal, 8 P
gc 25 @1317.902s 0%: 0.011+0.24+0.006 ms clock, 0.091+0/0.26/0.48+0.053 ms cpu, 37->37->6 MB, 38 MB goal, 8 P
gc 26 @1336.809s 0%: 0.034+0.22+0.006 ms clock, 0.27+0/0.27/0.53+0.053 ms cpu, 12->12->12 MB, 13 MB goal, 8 P
scvg8: inuse: 17, idle: 46, sys: 63, released: 0, consumed: 63 (MB)
gc 27 @1373.707s 0%: 0.037+0.29+0.006 ms clock, 0.29+0/0.33/0.78+0.051 ms cpu, 23->23->23 MB, 24 MB goal, 8 P
gc 28 @1445.610s 0%: 0.022+0.29+0.006 ms clock, 0.17+0/0.37/0.64+0.048 ms cpu, 46->46->14 MB, 47 MB goal, 8 P
gc 29 @1490.911s 0%: 0.008+0.19+0.004 ms clock, 0.067+0/0.25/0.66+0.036 ms cpu, 29->29->29 MB, 30 MB goal, 8 P
scvg9: inuse: 32, idle: 30, sys: 63, released: 0, consumed: 63 (MB)
gc 30 @1579.201s 0%: 0.010+0.25+0.006 ms clock, 0.083+0/0.30/0.72+0.048 ms cpu, 56->56->25 MB, 58 MB goal, 8 P
scvg10: inuse: 48, idle: 15, sys: 63, released: 0, consumed: 63 (MB)
gc 31 @1656.509s 0%: 0.010+0.26+0.006 ms clock, 0.085+0/0.33/0.64+0.048 ms cpu, 49->49->18 MB, 50 MB goal, 8 P
gc 32 @1712.403s 0%: 0.014+0.27+0.006 ms clock, 0.11+0/0.37/0.46+0.049 ms cpu, 35->35->4 MB, 36 MB goal, 8 P
gc 33 @1726.509s 0%: 0.011+0.22+0.006 ms clock, 0.091+0/0.27/0.51+0.050 ms cpu, 9->9->9 MB, 10 MB goal, 8 P
gc 34 @1754.002s 0%: 0.016+0.26+0.006 ms clock, 0.13+0/0.34/0.56+0.050 ms cpu, 17->17->17 MB, 18 MB goal, 8 P
scvg11: inuse: 32, idle: 30, sys: 63, released: 0, consumed: 63 (MB)
gc 35 @1807.606s 0%: 0.033+0.27+0.006 ms clock, 0.26+0/0.26/0.53+0.052 ms cpu, 34->34->3 MB, 35 MB goal, 8 P
gc 36 @1817.204s 0%: 0.024+0.23+0.006 ms clock, 0.19+0/0.28/0.50+0.051 ms cpu, 6->6->6 MB, 7 MB goal, 8 P
gc 37 @1835.905s 0%: 0.085+0.31+0.007 ms clock, 0.68+0/0.32/0.54+0.056 ms cpu, 12->12->12 MB, 13 MB goal, 8 P
gc 38 @1872.402s 0%: 0.011+0.25+0.009 ms clock, 0.090+0/0.30/0.63+0.074 ms cpu, 23->23->23 MB, 24 MB goal, 8 P
gc 39 @1943.602s 0%: 0.011+0.24+0.006 ms clock, 0.090+0/0.27/0.46+0.049 ms cpu, 45->45->14 MB, 46 MB goal, 8 P
scvg12: inuse: 17, idle: 46, sys: 63, released: 0, consumed: 63 (MB)
gc 40 @1987.501s 0%: 0.007+0.20+0.004 ms clock, 0.063+0/0.25/0.65+0.033 ms cpu, 28->28->28 MB, 29 MB goal, 8 P
gc 41 @2073.102s 0%: 0.050+0.29+0.006 ms clock, 0.40+0/0.32/0.72+0.050 ms cpu, 54->54->23 MB, 56 MB goal, 8 P
scvg13: inuse: 33, idle: 30, sys: 63, released: 0, consumed: 63 (MB)
gc 42 @2145.107s 0%: 0.041+0.27+0.006 ms clock, 0.33+0/0.46/0.63+0.049 ms cpu, 46->46->14 MB, 47 MB goal, 8 P
gc 43 @2190.606s 0%: 0.11+0.25+0.008 ms clock, 0.91+0/0.36/0.77+0.065 ms cpu, 29->29->29 MB, 30 MB goal, 8 P
scvg14: inuse: 48, idle: 14, sys: 63, released: 0, consumed: 63 (MB)
gc 44 @2279.311s 0%: 0.010+0.26+0.006 ms clock, 0.084+0/0.31/0.83+0.052 ms cpu, 56->56->25 MB, 58 MB goal, 8 P
gc 45 @2357.406s 0%: 0.061+0.27+0.006 ms clock, 0.48+0/0.31/0.68+0.050 ms cpu, 50->50->18 MB, 51 MB goal, 8 P
scvg15: inuse: 33, idle: 30, sys: 63, released: 0, consumed: 63 (MB)
gc 46 @2414.805s 0%: 0.010+0.16+0.014 ms clock, 0.087+0/0.19/0.33+0.11 ms cpu, 36->36->5 MB, 37 MB goal, 8 P
gc 47 @2431.810s 0%: 0.023+0.33+0.007 ms clock, 0.18+0/0.32/0.51+0.057 ms cpu, 10->10->10 MB, 11 MB goal, 8 P
gc 48 @2465.009s 0%: 0.048+0.29+0.005 ms clock, 0.38+0/0.35/0.72+0.046 ms cpu, 21->21->21 MB, 22 MB goal, 8 P
gc 49 @2529.707s 0%: 0.10+0.24+0.006 ms clock, 0.86+0/0.32/0.61+0.048 ms cpu, 41->41->10 MB, 42 MB goal, 8 P
scvg16: inuse: 17, idle: 45, sys: 63, released: 0, consumed: 63 (MB)
gc 50 @2561.012s 0%: 0.024+0.31+0.006 ms clock, 0.19+0/0.33/0.70+0.050 ms cpu, 20->20->20 MB, 21 MB goal, 8 P
gc 51 @2622.011s 0%: 0.010+0.30+0.007 ms clock, 0.086+0/0.35/0.56+0.056 ms cpu, 39->39->7 MB, 40 MB goal, 8 P
gc 52 @2646.003s 0%: 0.009+0.22+0.005 ms clock, 0.076+0/0.25/0.47+0.045 ms cpu, 15->15->15 MB, 16 MB goal, 8 P
gc 53 @2692.902s 0%: 0.011+0.24+0.006 ms clock, 0.093+0/0.32/0.74+0.049 ms cpu, 30->30->30 MB, 31 MB goal, 8 P
scvg17: 0 MB released
scvg17: inuse: 33, idle: 30, sys: 63, released: 0, consumed: 63 (MB)
gc 54 @2784.303s 0%: 0.011+0.28+0.006 ms clock, 0.093+0/0.34/0.83+0.050 ms cpu, 58->58->27 MB, 60 MB goal, 8 P
scvg18: inuse: 48, idle: 14, sys: 63, released: 14, consumed: 48 (MB)
gc 55 @2867.607s 0%: 0.011+0.24+0.006 ms clock, 0.088+0/0.30/0.52+0.050 ms cpu, 53->53->22 MB, 54 MB goal, 8 P
gc 56 @2935.203s 0%: 0.019+0.23+0.023 ms clock, 0.15+0/0.30/0.65+0.19 ms cpu, 43->43->12 MB, 44 MB goal, 8 P
gc 57 @2972.103s 0%: 0.034+0.32+0.006 ms clock, 0.27+0/0.40/0.45+0.050 ms cpu, 23->23->23 MB, 24 MB goal, 8 P
scvg19: inuse: 33, idle: 30, sys: 63, released: 16, consumed: 46 (MB)
gc 58 @3044.011s 0%: 0.025+0.29+0.005 ms clock, 0.20+0/0.28/0.72+0.047 ms cpu, 46->46->14 MB, 47 MB goal, 8 P
gc 59 @3089.308s 0%: 0.060+0.31+0.007 ms clock, 0.48+0/0.33/0.75+0.056 ms cpu, 29->29->29 MB, 30 MB goal, 8 P
scvg20: inuse: 49, idle: 14, sys: 63, released: 14, consumed: 49 (MB)
gc 60 @3177.703s 0%: 0.041+0.41+0.008 ms clock, 0.33+0/0.48/0.69+0.071 ms cpu, 56->56->25 MB, 58 MB goal, 8 P
gc 61 @3255.110s 0%: 0.020+0.24+0.006 ms clock, 0.16+0/0.30/0.72+0.049 ms cpu, 49->49->18 MB, 50 MB goal, 8 P
scvg21: inuse: 33, idle: 29, sys: 63, released: 14, consumed: 48 (MB)
gc 62 @3311.110s 0%: 0.12+0.23+0.007 ms clock, 0.97+0/0.24/0.49+0.059 ms cpu, 35->35->4 MB, 36 MB goal, 8 P
gc 63 @3325.511s 0%: 0.012+0.15+0.004 ms clock, 0.098+0/0.20/0.39+0.036 ms cpu, 9->9->9 MB, 10 MB goal, 8 P
gc 64 @3353.608s 0%: 0.043+0.24+0.006 ms clock, 0.34+0/0.36/0.50+0.049 ms cpu, 18->18->18 MB, 19 MB goal, 8 P
gc 65 @3408.311s 0%: 0.012+0.26+0.006 ms clock, 0.098+0/0.27/0.52+0.049 ms cpu, 35->35->3 MB, 36 MB goal, 8 P
gc 66 @3420.110s 0%: 0.010+0.25+0.005 ms clock, 0.087+0/0.25/0.58+0.046 ms cpu, 7->7->7 MB, 8 MB goal, 8 P
gc 67 @3443.110s 0%: 0.012+0.23+0.006 ms clock, 0.097+0/0.28/0.63+0.048 ms cpu, 14->14->14 MB, 15 MB goal, 8 P
scvg22: inuse: 17, idle: 45, sys: 63, released: 0, consumed: 63 (MB)
gc 68 @3487.903s 0%: 0.011+0.26+0.004 ms clock, 0.092+0/0.29/0.67+0.037 ms cpu, 28->28->28 MB, 29 MB goal, 8 P
gc 69 @3575.304s 0%: 0.011+0.24+0.006 ms clock, 0.093+0/0.33/0.87+0.050 ms cpu, 56->56->24 MB, 57 MB goal, 8 P
scvg23: inuse: 33, idle: 29, sys: 63, released: 20, consumed: 42 (MB)
gc 70 @3650.811s 0%: 0.036+0.30+0.006 ms clock, 0.29+0/0.30/0.62+0.048 ms cpu, 48->48->17 MB, 49 MB goal, 8 P
gc 71 @3703.207s 0%: 0.011+0.24+0.006 ms clock, 0.088+0/0.23/0.50+0.048 ms cpu, 33->33->2 MB, 34 MB goal, 8 P
gc 72 @3710.409s 0%: 0.040+0.27+0.006 ms clock, 0.32+0/0.24/0.58+0.050 ms cpu, 4->4->4 MB, 5 MB goal, 8 P
gc 73 @3724.511s 0%: 0.020+0.37+0.006 ms clock, 0.16+0/0.44/0.46+0.052 ms cpu, 9->9->9 MB, 10 MB goal, 8 P
scvg24: inuse: 18, idle: 45, sys: 63, released: 29, consumed: 34 (MB)
gc 74 @3752.012s 0%: 0.022+0.29+0.006 ms clock, 0.17+0/0.38/0.67+0.052 ms cpu, 17->17->17 MB, 18 MB goal, 8 P
gc 75 @3805.608s 0%: 0.080+0.24+0.006 ms clock, 0.64+0/0.24/0.55+0.051 ms cpu, 34->34->3 MB, 35 MB goal, 8 P
gc 76 @3815.202s 0%: 0.011+0.23+0.005 ms clock, 0.091+0/0.23/0.47+0.047 ms cpu, 6->6->6 MB, 7 MB goal, 8 P
gc 77 @3833.902s 0%: 0.010+0.22+0.008 ms clock, 0.085+0/0.28/0.51+0.065 ms cpu, 12->12->12 MB, 13 MB goal, 8 P
gc 78 @3870.403s 0%: 0.041+0.23+0.007 ms clock, 0.32+0/0.31/0.77+0.062 ms cpu, 23->23->23 MB, 24 MB goal, 8 P
scvg25: inuse: 33, idle: 29, sys: 63, released: 28, consumed: 35 (MB)
gc 79 @3941.505s 0%: 0.094+0.31+0.063 ms clock, 0.75+0/0.37/0.73+0.50 ms cpu, 45->45->14 MB, 46 MB goal, 8 P
gc 80 @3985.311s 0%: 0.080+0.22+0.006 ms clock, 0.64+0/0.31/0.47+0.049 ms cpu, 28->28->28 MB, 29 MB goal, 8 P
scvg26: inuse: 49, idle: 14, sys: 63, released: 14, consumed: 49 (MB)
gc 81 @4070.701s 0%: 0.009+0.18+0.004 ms clock, 0.079+0/0.23/0.53+0.036 ms cpu, 54->54->23 MB, 56 MB goal, 8 P
gc 82 @4142.306s 0%: 0.064+0.27+0.006 ms clock, 0.51+0/0.29/0.61+0.051 ms cpu, 46->46->14 MB, 47 MB goal, 8 P
gc 83 @4187.104s 0%: 0.010+0.29+0.006 ms clock, 0.080+0/0.36/0.59+0.050 ms cpu, 28->28->28 MB, 29 MB goal, 8 P
scvg27: inuse: 33, idle: 29, sys: 63, released: 16, consumed: 47 (MB)
gc 84 @4274.406s 0%: 0.082+0.27+0.005 ms clock, 0.65+0/0.37/0.67+0.044 ms cpu, 56->56->24 MB, 57 MB goal, 8 P
gc 85 @4349.811s 0%: 0.074+0.30+0.006 ms clock, 0.59+0/0.38/0.74+0.053 ms cpu, 48->48->17 MB, 49 MB goal, 8 P
scvg28: inuse: 18, idle: 45, sys: 63, released: 8, consumed: 54 (MB)
gc 86 @4401.808s 0%: 0.012+0.23+0.006 ms clock, 0.096+0/0.24/0.58+0.054 ms cpu, 33->33->2 MB, 34 MB goal, 8 P
gc 87 @4408.310s 0%: 0.012+0.23+0.006 ms clock, 0.098+0/0.28/0.52+0.052 ms cpu, 4->4->4 MB, 5 MB goal, 8 P
gc 88 @4421.008s 0%: 0.022+0.25+0.006 ms clock, 0.17+0/0.30/0.66+0.049 ms cpu, 8->8->8 MB, 9 MB goal, 8 P
gc 89 @4445.808s 0%: 0.009+0.19+0.005 ms clock, 0.076+0/0.20/0.40+0.043 ms cpu, 15->15->15 MB, 16 MB goal, 8 P
gc 90 @4494.207s 0%: 0.014+0.33+0.008 ms clock, 0.11+0/0.44/0.70+0.067 ms cpu, 31->31->31 MB, 32 MB goal, 8 P
scvg29: inuse: 34, idle: 29, sys: 63, released: 26, consumed: 36 (MB)
gc 91 @4588.509s 0%: 0.011+0.27+0.006 ms clock, 0.092+0/0.34/0.83+0.049 ms cpu, 60->60->29 MB, 62 MB goal, 8 P
scvg30: inuse: 49, idle: 13, sys: 63, released: 13, consumed: 49 (MB)
gc 92 @4677.504s 0%: 0.033+0.34+0.006 ms clock, 0.26+0/0.44/0.88+0.053 ms cpu, 57->57->25 MB, 58 MB goal, 8 P
gc 93 @4756.211s 0%: 0.011+0.33+0.006 ms clock, 0.091+0/0.34/0.64+0.048 ms cpu, 50->50->19 MB, 51 MB goal, 8 P
scvg31: inuse: 34, idle: 29, sys: 63, released: 16, consumed: 47 (MB)
gc 94 @4814.709s 0%: 0.035+0.29+0.007 ms clock, 0.28+0/0.30/0.53+0.058 ms cpu, 37->37->6 MB, 38 MB goal, 8 P
gc 95 @4833.910s 0%: 0.046+0.25+0.006 ms clock, 0.36+0/0.33/0.64+0.051 ms cpu, 12->12->12 MB, 13 MB goal, 8 P
gc 96 @4871.405s 0%: 0.071+0.29+0.007 ms clock, 0.57+0/0.41/0.69+0.060 ms cpu, 24->24->24 MB, 25 MB goal, 8 P
gc 97 @4944.505s 0%: 0.066+0.27+0.006 ms clock, 0.53+0/0.29/0.63+0.054 ms cpu, 46->46->15 MB, 48 MB goal, 8 P
scvg32: inuse: 18, idle: 44, sys: 63, released: 15, consumed: 47 (MB)
gc 98 @4992.106s 0%: 0.024+0.30+0.006 ms clock, 0.19+0/0.35/0.78+0.048 ms cpu, 30->30->30 MB, 31 MB goal, 8 P
gc 99 @5084.906s 0%: 0.013+0.26+0.006 ms clock, 0.10+0/0.33/0.82+0.051 ms cpu, 59->59->28 MB, 61 MB goal, 8 P
scvg33: inuse: 34, idle: 29, sys: 63, released: 14, consumed: 49 (MB)
gc 100 @5171.006s 0%: 0.012+0.30+0.007 ms clock, 0.10+0/0.36/0.90+0.057 ms cpu, 55->55->24 MB, 56 MB goal, 8 P
gc 101 @5244.002s 0%: 0.010+0.24+0.006 ms clock, 0.086+0/0.25/0.43+0.055 ms cpu, 46->46->15 MB, 48 MB goal, 8 P
scvg34: inuse: 18, idle: 44, sys: 63, released: 12, consumed: 50 (MB)
gc 102 @5291.403s 0%: 0.011+0.25+0.008 ms clock, 0.090+0/0.33/0.80+0.066 ms cpu, 30->30->30 MB, 31 MB goal, 8 P
gc 103 @5383.803s 0%: 0.011+0.29+0.006 ms clock, 0.092+0/0.35/0.94+0.053 ms cpu, 59->59->28 MB, 60 MB goal, 8 P
scvg35: inuse: 34, idle: 29, sys: 63, released: 0, consumed: 62 (MB)
gc 104 @5469.107s 0%: 0.011+0.37+0.008 ms clock, 0.090+0/0.40/0.77+0.064 ms cpu, 54->54->23 MB, 56 MB goal, 8 P
gc 105 @5540.506s 0%: 0.031+0.30+0.006 ms clock, 0.25+0/0.31/0.71+0.050 ms cpu, 45->45->14 MB, 47 MB goal, 8 P
scvg36: inuse: 18, idle: 44, sys: 63, released: 12, consumed: 50 (MB)
gc 106 @5584.907s 0%: 0.099+0.33+0.008 ms clock, 0.79+0/0.43/0.72+0.068 ms cpu, 28->28->28 MB, 29 MB goal, 8 P
gc 107 @5671.406s 0%: 0.031+0.18+0.005 ms clock, 0.25+0/0.21/0.61+0.040 ms cpu, 55->55->24 MB, 56 MB goal, 8 P
scvg37: inuse: 34, idle: 28, sys: 63, released: 17, consumed: 46 (MB)
gc 108 @5745.210s 0%: 0.012+0.27+0.006 ms clock, 0.10+0/0.31/0.67+0.049 ms cpu, 47->47->16 MB, 48 MB goal, 8 P
gc 109 @5794.303s 0%: 0.008+0.23+0.006 ms clock, 0.067+0.061/0.25/0.71+0.049 ms cpu, 31->31->31 MB, 32 MB goal, 8 P
scvg38: inuse: 50, idle: 13, sys: 63, released: 13, consumed: 50 (MB)
gc 110 @5890.006s 0%: 0.033+0.23+0.006 ms clock, 0.26+0/0.27/0.53+0.051 ms cpu, 61->61->30 MB, 63 MB goal, 8 P
gc 111 @5981.705s 0%: 0.016+0.22+0.005 ms clock, 0.13+0/0.30/0.50+0.042 ms cpu, 58->58->27 MB, 60 MB goal, 8 P
scvg39: inuse: 34, idle: 28, sys: 63, released: 13, consumed: 49 (MB)
gc 112 @6065.606s 0%: 0.020+0.29+0.004 ms clock, 0.16+0/0.37/0.71+0.036 ms cpu, 53->53->22 MB, 55 MB goal, 8 P
gc 113 @6134.304s 0%: 0.010+0.17+0.004 ms clock, 0.085+0/0.22/0.44+0.033 ms cpu, 44->44->12 MB, 45 MB goal, 8 P
scvg40: inuse: 19, idle: 44, sys: 63, released: 2, consumed: 60 (MB)
gc 114 @6173.404s 0%: 0.018+0.25+0.011 ms clock, 0.14+0/0.32/0.78+0.092 ms cpu, 25->25->25 MB, 26 MB goal, 8 P
gc 115 @6249.602s 0%: 0.010+0.26+0.006 ms clock, 0.085+0/0.31/0.68+0.051 ms cpu, 48->48->17 MB, 50 MB goal, 8 P
scvg41: inuse: 34, idle: 28, sys: 63, released: 13, consumed: 49 (MB)
gc 116 @6303.302s 0%: 0.011+0.20+0.006 ms clock, 0.092+0/0.26/0.49+0.048 ms cpu, 34->34->3 MB, 35 MB goal, 8 P
gc 117 @6313.105s 0%: 0.006+0.12+0.003 ms clock, 0.049+0/0.15/0.41+0.029 ms cpu, 6->6->6 MB, 7 MB goal, 8 P
gc 118 @6332.206s 0%: 0.011+0.33+0.006 ms clock, 0.091+0/0.33/0.72+0.052 ms cpu, 12->12->12 MB, 13 MB goal, 8 P
gc 119 @6369.503s 0%: 0.010+0.37+0.015 ms clock, 0.085+0/0.38/0.59+0.12 ms cpu, 23->23->23 MB, 24 MB goal, 8 P
gc 120 @6442.206s 0%: 0.006+0.19+0.004 ms clock, 0.051+0/0.23/0.31+0.032 ms cpu, 46->46->15 MB, 47 MB goal, 8 P
scvg42: 0 MB released
scvg42: inuse: 19, idle: 44, sys: 63, released: 13, consumed: 50 (MB)
gc 121 @6489.106s 0%: 0.011+0.28+0.005 ms clock, 0.093+0/0.34/0.77+0.047 ms cpu, 30->30->30 MB, 31 MB goal, 8 P
gc 122 @6580.502s 0%: 0.009+0.18+0.004 ms clock, 0.079+0/0.26/0.46+0.038 ms cpu, 58->58->27 MB, 60 MB goal, 8 P
scvg43: inuse: 34, idle: 28, sys: 63, released: 15, consumed: 48 (MB)
gc 123 @6663.801s 0%: 0.007+0.24+0.004 ms clock, 0.063+0/0.20/0.60+0.035 ms cpu, 53->53->22 MB, 54 MB goal, 8 P
gc 124 @6731.406s 0%: 0.015+0.26+0.007 ms clock, 0.12+0/0.32/0.56+0.063 ms cpu, 43->43->12 MB, 44 MB goal, 8 P
scvg44: inuse: 19, idle: 44, sys: 63, released: 34, consumed: 28 (MB)
gc 125 @6768.305s 0%: 0.013+0.26+0.007 ms clock, 0.10+0/0.42/0.64+0.062 ms cpu, 23->23->23 MB, 24 MB goal, 8 P
gc 126 @6840.205s 0%: 0.006+0.20+0.005 ms clock, 0.055+0/0.23/0.49+0.046 ms cpu, 46->46->14 MB, 47 MB goal, 8 P
gc 127 @6885.603s 0%: 0.056+0.27+0.007 ms clock, 0.45+0/0.35/0.79+0.063 ms cpu, 29->29->29 MB, 30 MB goal, 8 P
scvg45: inuse: 35, idle: 28, sys: 63, released: 20, consumed: 42 (MB)
gc 128 @6974.002s 0%: 0.011+0.35+0.006 ms clock, 0.091+0/0.43/0.67+0.048 ms cpu, 56->56->25 MB, 58 MB goal, 8 P
gc 129 @7051.503s 0%: 0.011+0.29+0.006 ms clock, 0.091+0/0.35/0.84+0.048 ms cpu, 49->49->18 MB, 51 MB goal, 8 P
scvg46: inuse: 19, idle: 43, sys: 63, released: 12, consumed: 50 (MB)
gc 130 @7107.703s 0%: 0.016+0.21+0.004 ms clock, 0.13+0/0.29/0.57+0.036 ms cpu, 36->36->4 MB, 37 MB goal, 8 P
gc 131 @7122.402s 0%: 0.011+0.26+0.006 ms clock, 0.095+0/0.25/0.53+0.052 ms cpu, 9->9->9 MB, 10 MB goal, 8 P
gc 132 @7151.102s 0%: 0.008+0.17+0.004 ms clock, 0.071+0/0.21/0.50+0.036 ms cpu, 18->18->18 MB, 19 MB goal, 8 P
scvg47: inuse: 35, idle: 28, sys: 63, released: 28, consumed: 35 (MB)
gc 133 @7207.005s 0%: 0.006+0.14+0.004 ms clock, 0.053+0/0.15/0.33+0.039 ms cpu, 35->35->4 MB, 36 MB goal, 8 P
gc 134 @7221.202s 0%: 0.008+0.18+0.004 ms clock, 0.066+0/0.24/0.44+0.034 ms cpu, 9->9->9 MB, 10 MB goal, 8 P
gc 135 @7248.901s 0%: 0.011+0.24+0.007 ms clock, 0.091+0/0.28/0.60+0.062 ms cpu, 17->17->17 MB, 18 MB goal, 8 P
gc 136 @7302.906s 0%: 0.007+0.13+0.005 ms clock, 0.060+0/0.15/0.34+0.041 ms cpu, 34->34->3 MB, 35 MB goal, 8 P
gc 137 @7313.306s 0%: 0.011+0.23+0.008 ms clock, 0.089+0/0.29/0.57+0.070 ms cpu, 6->6->6 MB, 7 MB goal, 8 P
gc 138 @7333.603s 0%: 0.012+0.31+0.005 ms clock, 0.096+0/0.46/0.36+0.046 ms cpu, 13->13->13 MB, 14 MB goal, 8 P
scvg48: 0 MB released
scvg48: inuse: 19, idle: 43, sys: 63, released: 26, consumed: 36 (MB)
gc 139 @7373.204s 0%: 0.019+0.30+0.006 ms clock, 0.15+0/0.36/0.62+0.049 ms cpu, 25->25->25 MB, 26 MB goal, 8 P
gc 140 @7450.306s 0%: 0.053+1.4+0.005 ms clock, 0.42+0/0.24/0.56+0.041 ms cpu, 49->49->18 MB, 50 MB goal, 8 P
scvg49: inuse: 35, idle: 28, sys: 63, released: 13, consumed: 50 (MB)
gc 141 @7505.801s 0%: 0.008+0.43+0.005 ms clock, 0.068+0/0.57/0.75+0.045 ms cpu, 35->35->4 MB, 36 MB goal, 8 P
gc 142 @7519.105s 0%: 0.008+0.20+0.004 ms clock, 0.066+0/0.25/0.37+0.035 ms cpu, 8->8->8 MB, 9 MB goal, 8 P
gc 143 @7545.105s 0%: 0.008+0.16+0.004 ms clock, 0.071+0/0.21/0.51+0.036 ms cpu, 16->16->16 MB, 17 MB goal, 8 P
gc 144 @7595.706s 0%: 0.029+0.27+0.006 ms clock, 0.23+0/0.30/0.40+0.050 ms cpu, 32->32->1 MB, 33 MB goal, 8 P
gc 145 @7604.502s 0%: 0.008+0.19+0.009 ms clock, 0.064+0/0.20/0.50+0.078 ms cpu, 4->4->4 MB, 5 MB goal, 8 P
gc 146 @7616.706s 0%: 0.008+0.19+0.005 ms clock, 0.071+0/0.25/0.19+0.041 ms cpu, 7->7->7 MB, 8 MB goal, 8 P
gc 147 @7640.505s 0%: 0.006+0.16+0.004 ms clock, 0.055+0/0.22/0.26+0.033 ms cpu, 15->15->15 MB, 16 MB goal, 8 P
scvg50: inuse: 19, idle: 43, sys: 63, released: 18, consumed: 44 (MB)
gc 148 @7686.903s 0%: 0.010+0.26+0.005 ms clock, 0.085+0/0.34/0.91+0.047 ms cpu, 29->29->29 MB, 30 MB goal, 8 P
gc 149 @7777.303s 0%: 0.027+0.32+0.004 ms clock, 0.21+0/0.36/0.55+0.038 ms cpu, 58->58->26 MB, 59 MB goal, 8 P
scvg51: inuse: 35, idle: 27, sys: 63, released: 17, consumed: 46 (MB)
gc 150 @7858.705s 0%: 0.017+0.26+0.007 ms clock, 0.13+0/0.35/0.59+0.063 ms cpu, 52->52->21 MB, 53 MB goal, 8 P
gc 151 @7922.606s 0%: 0.016+0.22+0.006 ms clock, 0.12+0/0.30/0.51+0.055 ms cpu, 41->41->9 MB, 42 MB goal, 8 P
gc 152 @7952.301s 0%: 0.011+0.18+0.004 ms clock, 0.091+0/0.20/0.46+0.033 ms cpu, 19->19->19 MB, 20 MB goal, 8 P
scvg52: inuse: 19, idle: 43, sys: 63, released: 4, consumed: 59 (MB)
gc 153 @8010.203s 0%: 0.043+0.32+0.006 ms clock, 0.34+0/0.34/0.67+0.053 ms cpu, 37->37->5 MB, 38 MB goal, 8 P
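上面日志中 `gc` 行的格式在 runtime 包的 GODEBUG 文档(`gctrace=1`)中有说明:

```
gc # @#s #%: #+#+# ms clock, #+#/#/#+# ms cpu, #->#-># MB, # MB goal, # P

	gc #        the GC number, incremented at each GC
	@#s         time in seconds since program start
	#%          percentage of time spent in GC since program start
	#+...+#     wall-clock/CPU times for the phases of the GC
	#->#-># MB  heap size at GC start, at GC end, and live heap
	# MB goal   goal heap size
	# P         number of processors used
```

而 `scvg` 行是 scavenger(向操作系统归还内存的后台任务)的统计:inuse 是 Go 正在使用的内存,idle 是空闲可归还的内存,sys 是从操作系统申请的总内存,released 是已归还给操作系统的内存,consumed 等于 sys 减去 released,单位都是 MB。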