add website

This commit is contained in:
dawn_zhou 2022-01-20 20:14:54 +08:00
parent 9a7cef9e54
commit a7279dc83b
156 changed files with 18847 additions and 1 deletions

54
.github/workflows/documentation.yml vendored Normal file

@ -0,0 +1,54 @@
name: deploy
on:
  pull_request:
    branches: [main]
    paths: 'website/**'
  push:
    branches: [main]
    paths: 'website/**'
jobs:
  checks:
    if: github.event_name != 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - uses: actions/setup-node@v1
        with:
          node-version: '14.x'
      - name: Test Build
        working-directory: website
        run: |
          if [ -e yarn.lock ]; then
            yarn install --frozen-lockfile
          elif [ -e package-lock.json ]; then
            npm ci
          else
            npm i
          fi
          npm run build
  gh-release:
    if: github.event_name != 'pull_request'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - uses: actions/setup-node@v1
        with:
          node-version: '14.x'
      - name: Build
        working-directory: website
        run: |
          if [ -e yarn.lock ]; then
            yarn install --frozen-lockfile
          elif [ -e package-lock.json ]; then
            npm ci
          else
            npm i
          fi
          npm run build
      - name: Release to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./website/build

23
.gitignore vendored

@ -1,4 +1,25 @@
**/.DS_Store
.idea
node_modules
docs/.vuepress/dist
docs/.vuepress/dist
# Dependencies
/node_modules
# Production
/build
# Generated files
.docusaurus
.cache-loader
# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*

20
website/.gitignore vendored Normal file

@ -0,0 +1,20 @@
# Dependencies
/node_modules
# Production
/build
# Generated files
.docusaurus
.cache-loader
# Misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
npm-debug.log*
yarn-debug.log*
yarn-error.log*

6
website/README.md Normal file

@ -0,0 +1,6 @@
# go-zero documentation
Documentation is the interface between the product and the user. A well-written and well-organized set of documents helps users understand your product quickly. Our goal is to help you find and understand the information you need as quickly as possible.
[website](https://go-zero.dev)

3
website/babel.config.js Normal file

@ -0,0 +1,3 @@
module.exports = {
presets: [require.resolve('@docusaurus/core/lib/babel/preset')],
};

115
website/blog/2022-01-03.md Normal file
View File

@ -0,0 +1,115 @@
---
slug: 梦想总是要有的 - 工作20年程序员的2021年终总结
title: Dreams Are a Must - A 20-Year Programmer's 2021 in Review
authors:
name: kevwan
title: Author of go-zero
url: https://github.com/kevwan
image_url: https://github.com/kevwan.png
tags: [dream, go-zero, golang]
---
The turbulent 2021 is almost over. So much has happened to me this year; let me write down a brief review.
### Reviewing the goals
In last year's review I set two flags for myself.
![Docusaurus Plushie](./img/flag.jpeg)
The first one is not quantifiable, so not a great goal, but I think it went well: go-zero's engineering efficiency has been widely recognized by the community. Thanks to everyone who uses it and gives us feedback!
For the second goal I playfully used the word "small", but at the time I thought it was genuinely hard. Still, you have to have dreams: what if they come true? As I said in an earlier video interview, people should create a little difficulty for themselves, because difficulty drives progress.
![Docusaurus Plushie](./img/wechat.jpeg)
This is the WeChat Moments post I made on go-zero's first open-source anniversary.
### An extraordinary 2021
This year was very, very hard for the education industry, for the team, and for me. Thanks to TAL: although I left, it really is a good company, and I hope it can weather the storm and set sail again! Thanks to Xiaoheiban, which I worked on for four years, with partners who deeply trust each other and teammates who charged ahead together in hard times. It was truly an unforgettable career experience!
There is much gratitude and much reluctance, but one has to look forward: what's past is prologue.
### Deep involvement in the tech community
In the spirit of open source, I gave many in-depth technical talks to bring good technology and thinking to more developers, helping them improve service stability and development efficiency while raising their technical perspective; I also shared some thoughts on growing as an engineer.
As engineers, when we attend a talk we usually hope to hear solid, hands-on technical content we can apply to code right away. That is fair, and it is what engineers pursue. But after giving many talks, I gradually found that the deep technical material I share has far less lasting impact than the technical philosophy and design thinking I share.
To me, sharing a deep technique is giving someone a fish; sharing a good technical philosophy is teaching them to fish.
For the audience, the expectation is hands-on content; a good idea learned along the way counts as a bonus.
On the hands-on side, my ArchSummit talk received quite good feedback.
![Docusaurus Plushie](./img/star.jpeg)
Talk video: https://www.zhihu.com/zvideo/1398226082663809024
In fact I gave even more talks about technical philosophy. One topic is why I named the project go-zero: I hope that when solving a problem we go back to square one and think from scratch, instead of reaching for a hammer whenever we see a nail. As the saying goes: do the right thing, and do things right!
Many problems we hit at work are just symptoms; dig deeper and the problem may not exist at all, or the problem itself may be wrong. For example, while working on Go+ with Xu Shiwei recently, we implemented our own packages.Load and had to solve caching for repeated package loading, writing a lot of extremely complex code for it. In the end Xu found that everything we built could be done with a single Go command. The result: we deleted over a thousand lines of code, gained roughly a 20x efficiency improvement, and greatly improved robustness.
Elon Musk recently popularized first-principles thinking. My rough understanding of it: start from fundamentals and strip away noise and habitual thinking. I think that is the same idea the name go-zero expresses.
Another example: invited by ByteDance's tech academy, I gave a talk there. Afterwards people praised the hands-on content, but most of it was soon forgotten or set aside once applied. Long after, though, one attendee messaged me on WeChat: what stuck with him most, and influenced him most, was when a frontend engineer asked whether switching to backend was feasible, and I answered: if you love it, someone good at frontend can surely be good at backend too; skills transfer, and abilities are interconnected. I have evidence for this: I have done a fair amount of frontend work myself, and three excellent backend engineers on my team all started in mobile. This goes out to frontend engineers who want to switch to backend but hesitate!
### Open source progress
`go-zero` was accepted into the CNCF cloud native landscape.
`go-zero` in the `CNCF Landscape`: `https://landscape.cncf.io/?selected=go-zero`
It has repeatedly topped the `GitHub Go` trending list. Overseas users hope we can maintain issues and PRs in English so they can participate in the community too; I still need to figure out how to push this forward.
After go-zero reached 10k stars within a year, I have two plans for it:
Invest more in the code itself, making go-zero simpler, easier to use and more productive
Strengthen the ecosystem by building it together with top projects up and down the microservice stack
Please keep watching and using go-zero, and let us hear your voice (star, issue, PR). You can also join the go-zero community: a community of roughly 7,000 people can help you with more than just go-zero usage questions.
![Docusaurus Plushie](./img/quxian.jpeg)
Besides go-zero's continued steady growth (now 13.4k stars and 100 contributors), I also open-sourced several other Go projects:
https://github.com/kevwan/go-stash - an ultra-fast, lightweight Logstash alternative
https://github.com/kevwan/go-queue - delayed tasks and Pub/Sub built on Kafka and Beanstalkd
https://github.com/kevwan/chatbot - an ultra-fast pychatter alternative for building your own chatbots and simple customer-service bots
More hidden gems will follow once I find the time to tidy them up; follow my GitHub: https://github.com/kevwan
### Job change
Although I wanted to keep growing Xiaoheiban, the "double reduction" policy hit the industry too hard, and I left the education sector in November. No regrets, but I truly felt the incomparable impact a policy can have on an industry.
My next move took me a long time to decide.
First, I gave myself a clear positioning: stay as technical as possible and reduce the management share, because that is what I love, even though I am well past the age of 35 that worries so many people. Whatever your age, only doing what you love brings passion!
Second, move toward toD (developer-facing) companies, because I enjoy working with developers, and having built many developer tools myself I understand developers' deeper needs fairly well.
So I finally chose Qiniu Cloud, in charge of infrastructure. I will also spend a good amount of time talking with customers (engineers) to better understand the business scenarios of cloud customers, which benefits not only the company but go-zero as well. I will also put significant effort into Go+, and building Go+ forces me to understand Go's impressive engineering design even more deeply; it really is remarkable. If you want a deep understanding of Go, consider joining the Go+ open-source project. Honestly, the biggest reward: Xu Shiwei will review your code!
![Docusaurus Plushie](./img/qiniu.jpeg)
### Outlook for 2022
I hope to help Qiniu Cloud's infrastructure and engineering system reach the next level
In 2022 I hope go-zero becomes ever easier to use and the ecosystem reaches milestone results
One quantifiable goal: reach 20k stars by the end of 2022
### Acknowledgements
Thanks to my former colleagues at TAL & Xiaoheiban for their constant support and help
Thanks to my Qiniu Cloud colleagues for warmly helping me settle into the new job
Thanks to the tech & open-source communities for their companionship
Thanks to the many members of the go-zero community for discussing go-zero issues and design, powering its rapid growth
### Project link
https://github.com/zeromicro/go-zero
Feel free to use go-zero and star it to support us!

BIN website/blog/img/flag.jpeg Normal file (binary, 39 KiB)

BIN website/blog/img/qiniu.jpeg Normal file (binary, 21 KiB)

BIN (binary file, 35 KiB)

BIN website/blog/img/star.jpeg Normal file (binary, 122 KiB)

BIN (binary file, 146 KiB)

@ -0,0 +1,4 @@
{
"label": "Build Tools",
"position": 2
}


@ -0,0 +1,762 @@
---
sidebar_position: 2
---
# api syntax
## An api example
```go
/**
 * api syntax example and description
 */

// api syntax version
syntax = "v1"

// import literal
import "foo.api"

// import group
import (
    "bar.api"
    "foo/bar.api"
)

info(
    author: "songmeizi"
    date:   "2020-01-08"
    desc:   "api syntax example and description"
)

// type literal
type Foo{
    Foo int `json:"foo"`
}

// type group
type(
    Bar{
        Bar int `json:"bar"`
    }
)

// service block
@server(
    jwt:   Auth
    group: foo
)
service foo-api{
    @doc "foo"
    @handler foo
    post /foo (Foo) returns (Bar)
}
```
## api syntax structure
* syntax declaration
* import block
* info block
* type block
* service block
* hidden channels
:::tip
Syntactically, each of these blocks (taking a block as the unit) may be declared at any position in the .api file.
However, to keep files readable, we recommend declaring them in the order above, since a future strict mode may enforce this block order.
:::
### syntax declaration
`syntax` is a newly added construct. It was introduced to:
* quickly locate problematic syntax constructs for a given api version
* parse the file according to its version
* prevent major api syntax upgrades from breaking backward compatibility
:::caution
An imported api file must have the same syntax version as the main api file.
:::
**Syntax definition**
```antlrv4
'syntax'={checkVersion(p)}STRING
```
**Notes**
syntax: a fixed token marking the start of a syntax declaration
checkVersion: a custom Go method that checks whether `STRING` is a valid version number; currently `STRING` must match the regex `(?m)"v[1-9][0-9]*"`
STRING: a string wrapped in double quotes, e.g. "v1"
An api file may contain zero or one syntax declaration; if omitted, the version defaults to v1
**Valid examples** ✅
eg1: loosely formatted
```api
syntax="v1"
```
eg2: recommended formatting
```api
syntax = "v2"
```
**Invalid examples** ❌
eg1:
```api
syntax = "v0"
```
eg2:
```api
syntax = v1
```
eg3:
```api
syntax = "V1"
```
## import block
As the business grows, more and more structs and services get defined in the api file, and keeping every declaration in a single api file becomes a real problem: it greatly increases the cost of reading and maintaining it. The import block solves this by splitting the api across multiple files; declaring them according to certain rules lowers both reading and maintenance cost.
:::caution
Unlike Go, import here carries no package declaration; it is merely the inclusion of a file path. After parsing, all declarations are merged into a single spec.Spec.
The same path must not be imported more than once, otherwise parsing fails.
:::
**Syntax definition**
```antlrv4
'import' {checkImportValue(p)}STRING
|'import' '(' ({checkImportValue(p)}STRING)+ ')'
```
**Notes**
import: a fixed token marking the start of an import statement
checkImportValue: a custom Go method that checks whether `STRING` is a valid file path; currently `STRING` must match the regex `(?m)"(/?[a-zA-Z0-9_#-])+\.api"`
STRING: a string wrapped in double quotes, e.g. "foo.api"
**Valid examples** ✅
eg:
```api
import "foo.api"
import "foo/bar.api"
import(
    "bar.api"
    "foo/bar/foo.api"
)
```
**Invalid examples** ❌
eg:
```api
import foo.api
import "foo.txt"
import (
    bar.api
    bar.api
)
```
## info block
The info block is a body of key-value pairs that describes an api service. The parser maps it into spec.Spec so the metadata can be carried along when the api is translated into other languages (Go, Java, etc.). If it is only meant as a description of the current api file and need not be passed through to other languages, a plain multi-line comment or a Java-style doc comment is enough; see **hidden channels** below for details on comments.
:::caution
Duplicate keys are not allowed, and each api file may contain zero or one info block.
:::
**Syntax definition**
```antlrv4
'info' '(' (ID {checkKeyValue(p)}VALUE)+ ')'
```
**Notes**
info: a fixed token marking the start of an info block
checkKeyValue: a custom Go method that checks whether `VALUE` is a legal value
VALUE: the value for a key. A single-line value may contain any characters except '\r', '\n' and '/'; multi-line values must be wrapped in double quotes, and we strongly recommend wrapping all values in double quotes
**Valid examples** ✅
eg1: loosely formatted
```api
info(
    foo: foo value
    bar:"bar value"
    desc:"long long long long
    long long text"
)
```
eg2: recommended formatting
```api
info(
    foo:  "foo value"
    bar:  "bar value"
    desc: "long long long long long long text"
)
```
**Invalid examples** ❌
eg1: no key-value content
```api
info()
```
eg2: missing colon
```api
info(
    foo value
)
```
eg3: key-value not on its own line
```api
info(foo:"value")
```
eg4: missing key
```api
info(
    : "value"
)
```
eg5: illegal key
```api
info(
    12: "value"
)
```
eg6: removed legacy multi-line syntax
```api
info(
    foo: >
    some text
    <
)
```
## type block
An api service needs structs (classes) as carriers for request and response bodies, so we need a way to declare them. The type block evolved from Go's type declarations and keeps some of Go's characteristics. The Go features it retains:
* Go's built-in data types are kept: `bool`, `int`, `int8`, `int16`, `int32`, `int64`, `uint`, `uint8`, `uint16`, `uint32`, `uint64`, `uintptr`, `float32`, `float64`, `complex64`, `complex128`, `string`, `byte`, `rune`
* Go struct-style declarations are supported
* Go keywords are reserved
:::caution
* Type aliases are not supported
* The time.Time data type is not supported
* Struct names and field names must not be Go keywords
:::
**Syntax definition**
Since it closely mirrors Go, a detailed description is omitted here; see the typeSpec definition in [ApiParser.g4](https://github.com/zeromicro/go-zero/blob/master/tools/goctl/api/parser/g4/ApiParser.g4) for the exact grammar.
**Notes**
Follow the Go style.
**Valid examples** ✅
eg1: loosely formatted
```api
type Foo struct{
    Id  int `path:"id"` // ①
    Foo int `json:"foo"`
}

type Bar struct{
    // unexported field
    bar int `form:"bar"`
}

type(
    // unexported struct
    fooBar struct{
        FooBar int `json:"fooBar"`
    }
)
```
eg2: recommended formatting
```api
type Foo{
    Id  int `path:"id"`
    Foo int `json:"foo"`
}

type Bar{
    Bar int `form:"bar"`
}

type(
    FooBar{
        FooBar int `json:"fooBar"`
    }
)
```
**Invalid examples** ❌
eg:
```api
type Gender int // not supported

// not the struct token
type Foo structure{
    CreateTime time.Time // time.Time is not supported, and no tag is declared
}

// Go keyword var
type var{}

type Foo{
    // Go keyword interface
    Foo interface // no tag declared
}

type Foo{
    foo int
    // map keys must be Go built-in data types, and no tag is declared
    m map[Bar]string
}
```
:::tip
Tag definitions follow the same syntax as Go's json tags. Besides json tags, go-zero provides additional tags to describe fields;
see the table below.
:::
* Tag table
<table>
<tr>
<td>tag key</td> <td>description</td> <td>provided by</td><td>scope</td> <td>example</td>
</tr>
<tr>
<td>json</td> <td>json serialization tag</td> <td>golang</td> <td>request, response</td> <td><code>json:"fooo"</code></td>
</tr>
<tr>
<td>path</td> <td>route path, e.g. <code>/foo/:id</code></td> <td>go-zero</td> <td>request</td> <td><code>path:"id"</code></td>
</tr>
<tr>
<td>form</td> <td>marks the request body as a form (for POST) or as query parameters (for GET, e.g. <code>/search?name=keyword</code>)</td> <td>go-zero</td> <td>request</td> <td><code>form:"name"</code></td>
</tr>
<tr>
<td>header</td> <td>HTTP header, e.g. <code>Name: value</code></td> <td>go-zero</td> <td>request</td> <td><code>header:"name"</code></td>
</tr>
</table>
* Tag modifiers
Common parameter validation descriptions:
<table>
<tr>
<td>tag key </td> <td>description </td> <td>provided by </td> <td>scope </td> <td>example </td>
</tr>
<tr>
<td>optional</td> <td>marks the field as optional</td> <td>go-zero</td> <td>request</td> <td><code>json:"name,optional"</code></td>
</tr>
<tr>
<td>options</td> <td>enumerates the allowed values for the field, separated by a vertical bar |</td> <td>go-zero</td> <td>request</td> <td><code>json:"gender,options=male"</code></td>
</tr>
<tr>
<td>default</td> <td>sets the field's default value</td> <td>go-zero</td> <td>request</td> <td><code>json:"gender,default=male"</code></td>
</tr>
<tr>
<td>range</td> <td>constrains the field's numeric range</td> <td>go-zero</td> <td>request</td> <td><code>json:"age,range=[0:120]"</code></td>
</tr>
</table>
:::tip
Tag modifiers must follow the tag value, separated by a comma ,
:::
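Putting the modifiers above together, a request type might combine several of them in one declaration (an illustrative sketch; the type and field names are hypothetical):

```api
type SearchReq{
    // required field restricted to an enumerated set of values
    Gender string `json:"gender,options=male|female"`
    // optional field that falls back to a default when absent
    Page int `json:"page,optional,default=1"`
    // numeric field constrained to a closed range
    Age int `json:"age,range=[0:120]"`
}
```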
## service block
The service block defines an api service, including the service name, service metadata, middleware declarations, routes, handlers, and so on.
:::caution
* The main api file and imported api files must declare the same service name; ambiguous service names are not allowed.
* Handler names must not be duplicated
* Routes (request method + request path) must not be duplicated
* The request body must be declared as a plain (non-pointer) struct; the response body has some backward-compatibility handling, see the notes below
:::
**语法定义**
```antlrv4
serviceSpec: atServer? serviceApi;
atServer: '@server' lp='(' kvLit+ rp=')';
serviceApi: {match(p,"service")}serviceToken=ID serviceName lbrace='{' serviceRoute* rbrace='}';
serviceRoute: atDoc? (atServer|atHandler) route;
atDoc: '@doc' lp='('? ((kvLit+)|STRING) rp=')'?;
atHandler: '@handler' ID;
route: {checkHttpMethod(p)}httpMethod=ID path request=body? returnToken=ID? response=replybody?;
body: lp='(' (ID)? rp=')';
replybody: lp='(' dataType? rp=')';
// kv
kvLit: key=ID {checkKeyValue(p)}value=LINE_VALUE;
serviceName: (ID '-'?)+;
path: (('/' (ID ('-' ID)*))|('/:' (ID ('-' ID)?)))+;
```
**Notes**
serviceSpec: contains an optional `atServer` block and a `serviceApi` block. They follow a sequence pattern: a service must be written in this order, otherwise parsing fails
atServer: an optional block defining key-value server metadata, with '@server'
marking the start of the block. It can describe either a serviceApi or a route block; a few keys have special meaning depending on which one it describes, see the **atServer key reference** below
serviceApi: contains one or more `serviceRoute` blocks
serviceRoute: contains `atDoc`, a handler and a `route` in sequence
atDoc: an optional block giving a key-value description of a route. After parsing it is carried into the spec.Spec struct; if you don't care about passing it into spec.Spec, a single-line comment is recommended instead.
handler: the handler-layer description of a route. The handler name can be set either via the `handler` key in atServer, or directly with an atHandler block
atHandler: the fixed token '@handler' followed by a value matching the regex `[_a-zA-Z][a-zA-Z_-]*`, declaring a handler name
route: a route, composed of `httpMethod`, `path`, an optional `request` and an optional `response`; `httpMethod` must be lowercase.
body: the api request body: an optional ID wrapped in parentheses ()
replyBody: the api response body: a struct or ~~array~~ wrapped in parentheses () (arrays are kept for backward compatibility and may be deprecated later; we strongly recommend wrapping the response in a struct rather than using an array directly)
kvLit: same key-value form as in info
serviceName: one or more ID values joined with '-'
path: the api request path. It must start with '/' or '/:' and must not end with '/'; in between it may contain IDs or multiple '-'-joined ID segments
**atServer key reference**
When describing a service:
<table>
<tr>
<td>key</td><td>description</td><td>example</td>
</tr>
<tr>
<td>jwt</td><td>declares that all routes in this service require jwt authentication; code containing the jwt logic is generated automatically</td><td><code>jwt: Auth</code></td>
</tr>
<tr>
<td>group</td><td>declares the file group for this service or route</td><td><code>group: login</code></td>
</tr>
<tr>
<td>middleware</td><td>declares the middleware enabled for this service</td><td><code>middleware: AuthMiddleware</code></td>
</tr>
<tr>
<td>prefix</td><td>adds a route prefix</td><td><code>prefix: /api</code></td>
</tr>
</table>
When describing a route:
<table>
<tr>
<td>key</td><td>description</td><td>example</td>
</tr>
<tr>
<td>handler</td><td>declares a handler</td><td>-</td>
</tr>
</table>
**Valid examples** ✅
eg1: loosely formatted
```api
@server(
    jwt:        Auth
    group:      foo
    middleware: AuthMiddleware
    prefix      /api
)
service foo-api{
    @doc(
        summary: foo
    )
    @server(
        handler: foo
    )
    // unexported body
    post /foo/:id (foo) returns (bar)

    @doc "bar"
    @handler bar
    post /bar returns ([]int) // arrays as response bodies are not recommended

    @handler fooBar
    post /foo/bar (Foo) returns // 'returns' can be omitted
}
```
eg2: recommended formatting
```api
@server(
    jwt:        Auth
    group:      foo
    middleware: AuthMiddleware
    prefix:     /api
)
service foo-api{
    @doc "foo"
    @handler foo
    post /foo/:id (Foo) returns (Bar)
}

service foo-api{
    @handler ping
    get /ping

    @doc "foo"
    @handler bar
    post /bar/:id (Foo)
}
```
**Invalid examples** ❌
```api
// empty server blocks are not supported
@server(
)

// empty service blocks are not supported
service foo-api{
}

service foo-api{
    @doc kkkk // the short doc form must be wrapped in double quotes
    @handler foo
    post /foo

    @handler foo // duplicated handler
    post /bar

    @handler fooBar
    post /bar // duplicated route

    // wrong order of @handler and @doc
    @handler someHandler
    @doc "some doc"
    post /some/path

    // missing handler
    post /some/path/:id

    @handler reqTest
    post /foo/req (*Foo) // data types other than plain structs are not supported as request bodies

    @handler replyTest
    post /foo/reply returns (*Foo) // data types other than plain structs and arrays (backward compatible, may be deprecated later) are not supported as response bodies
}
```
## Hidden channels
The hidden channels currently consist of whitespace, newlines and comments. Only comments are discussed here, since whitespace and newlines are currently of no use to us.
### Single-line comments
**Syntax definition**
```antlrv4
'//' ~[\r\n]*
```
**Notes**
As the definition shows, a single-line comment must start with `//` and its content must not contain newline characters
**Valid examples** ✅
```api
// doc
// comment
```
**Invalid examples** ❌
```api
// break
line comments
```
### Java-style doc comments
**Syntax definition**
```antlrv4
'/*' .*? '*/'
```
**Notes**
As the definition shows, a doc comment is any sequence of characters starting with `/*` and ending with `*/`.
**Valid examples** ✅
```api
/**
 * java-style doc
 */
```
**Invalid examples** ❌
```api
/*
 * java-style doc */
 */
```
## Doc & Comment
How does a developer attach and retrieve the doc or comment of an element?
**Doc**
All comments (single- or multi-line) from the line after the previous block's last non-hidden-channel content up to the first element of the current block are treated as the doc, and the original `//`, `/*`, `*/` markers are preserved.
**Comment**
The comment block (single- or multi-line) starting on the line of the current block's last element is treated as the comment, and the original `//`, `/*`, `*/` markers are preserved.
Doc and Comment support per block:
<table>
<tr>
<td>block</td><td>parent block</td><td>Doc</td><td>Comment</td>
</tr>
<tr>
<td>syntaxLit</td><td>api</td><td></td><td></td>
</tr>
<tr>
<td>kvLit</td><td>infoSpec</td><td></td><td></td>
</tr>
<tr>
<td>importLit</td><td>importSpec</td><td></td><td></td>
</tr>
<tr>
<td>typeLit</td><td>api</td><td></td><td></td>
</tr>
<tr>
<td>typeLit</td><td>typeBlock</td><td></td><td></td>
</tr>
<tr>
<td>field</td><td>typeLit</td><td></td><td></td>
</tr>
<tr>
<td>key-value</td><td>atServer</td><td></td><td></td>
</tr>
<tr>
<td>atHandler</td><td>serviceRoute</td><td></td><td></td>
</tr>
<tr>
<td>route</td><td>serviceRoute</td><td></td><td></td>
</tr>
</table>
The following shows where doc and comment attach to each block after parsing:
```api
// syntaxLit doc
syntax = "v1" // syntaxLit comment

info(
    // kvLit doc
    author: songmeizi // kvLit comment
)

// typeLit doc
type Foo {}

type(
    // typeLit doc
    Bar{}

    FooBar{
        // field doc
        Name int // field comment
    }
)

@server(
    /**
     * kvLit doc
     * enable jwt authentication
     */
    jwt: Auth /**kvLit comment*/
)

service foo-api{
    // atHandler doc
    @handler foo // atHandler comment

    /*
     * route doc
     * POST request
     * path: /foo
     * request body: Foo
     * response body: Foo
     */
    post /foo (Foo) returns (Foo) // route comment
}
```


@ -0,0 +1,71 @@
---
sidebar_position: 2
---
# Building an API
`goctl api` is one of the core modules of `goctl`. It generates a complete `api` service from a .api file in a single step; to spin up a `go-zero` api demo project, you don't even need to write any code. In a traditional api project we create directories at every level, write structs, define routes and add `logic` files; for a single endpoint's worth of plumbing this easily takes five to six minutes before we ever reach the business logic, not counting the mistakes made along the way. And as services and endpoints multiply, that setup time grows proportionally. `goctl api` takes over all of this work: no matter how many endpoints your contract defines, it finishes in less than 10 seconds.
:::tip
Writing the structs and defining the routes is replaced by the api file, so overall what you save is the time spent creating folders and adding the various files and resource dependencies.
:::
### The api command
```shell
$ goctl api -h
```
```text
NAME:
goctl api - generate api related files
USAGE:
goctl api command [command options] [arguments...]
COMMANDS:
new fast create api service
format format api files
validate validate api file
doc generate doc files
go generate go files for provided api in yaml file
java generate java files for provided api in api file
ts generate ts files for provided api in api file
dart generate dart files for provided api in api file
kt generate kotlin code for provided api file
plugin custom file generator
OPTIONS:
-o value the output api file
--help, -h show help
```
As shown above, api offers quite a few subcommands and flags depending on the feature. Here we focus on
the `go` subcommand, which generates a Go api service; its help can be viewed with `goctl api go -h`:
```shell
$ goctl api go -h
```
```text
NAME:
goctl api go - generate go files for provided api in yaml file
USAGE:
goctl api go [command options] [arguments...]
OPTIONS:
--dir value the target dir
--api value the api file
--style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
```
* --dir: the output directory for generated code
* --api: the api source file
* --style: the naming style of generated files; see the [file naming style guide](https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md)
### Example
```shell
$ goctl api go -api user.api -dir . -style gozero
```


@ -0,0 +1,379 @@
---
sidebar_position: 4
---
# Building a Model
`goctl model` is one of the tool components under `go-zero`. It currently understands `mysql ddl` and generates `model`-layer code; via the command line or the `idea` plugin (coming soon), you can choose to generate the code logic with or without a `redis cache`.
## Quick start
* Generate from ddl
```shell
$ goctl model mysql ddl -src="./*.sql" -dir="./sql/model" -c
```
Running the command above quickly generates the CRUD code.
```text
model
│   ├── error.go
│   └── usermodel.go
```
* Generate from a datasource
```shell
$ goctl model mysql datasource -url="user:password@tcp(127.0.0.1:3306)/database" -table="*" -dir="./model"
```
* Generated code example
```go
package model
import (
"database/sql"
"fmt"
"strings"
"time"
"github.com/tal-tech/go-zero/core/stores/cache"
"github.com/tal-tech/go-zero/core/stores/sqlc"
"github.com/tal-tech/go-zero/core/stores/sqlx"
"github.com/tal-tech/go-zero/core/stringx"
"github.com/tal-tech/go-zero/tools/goctl/model/sql/builderx"
)
var (
userFieldNames = builderx.RawFieldNames(&User{})
userRows = strings.Join(userFieldNames, ",")
userRowsExpectAutoSet = strings.Join(stringx.Remove(userFieldNames, "`id`", "`create_time`", "`update_time`"), ",")
userRowsWithPlaceHolder = strings.Join(stringx.Remove(userFieldNames, "`id`", "`create_time`", "`update_time`"), "=?,") + "=?"
cacheUserNamePrefix = "cache#User#name#"
cacheUserMobilePrefix = "cache#User#mobile#"
cacheUserIdPrefix = "cache#User#id#"
cacheUserPrefix = "cache#User#user#"
)
type (
UserModel interface {
Insert(data User) (sql.Result, error)
FindOne(id int64) (*User, error)
FindOneByUser(user string) (*User, error)
FindOneByName(name string) (*User, error)
FindOneByMobile(mobile string) (*User, error)
Update(data User) error
Delete(id int64) error
}
defaultUserModel struct {
sqlc.CachedConn
table string
}
User struct {
Id int64 `db:"id"`
User string `db:"user"` // user
Name string `db:"name"` // user name
Password string `db:"password"` // user password
Mobile string `db:"mobile"` // mobile number
Gender string `db:"gender"` // male|female|undisclosed
Nickname string `db:"nickname"` // user nickname
CreateTime time.Time `db:"create_time"`
UpdateTime time.Time `db:"update_time"`
}
)
func NewUserModel(conn sqlx.SqlConn, c cache.CacheConf) UserModel {
return &defaultUserModel{
CachedConn: sqlc.NewConn(conn, c),
table: "`user`",
}
}
func (m *defaultUserModel) Insert(data User) (sql.Result, error) {
userNameKey := fmt.Sprintf("%s%v", cacheUserNamePrefix, data.Name)
userMobileKey := fmt.Sprintf("%s%v", cacheUserMobilePrefix, data.Mobile)
userKey := fmt.Sprintf("%s%v", cacheUserPrefix, data.User)
ret, err := m.Exec(func(conn sqlx.SqlConn) (result sql.Result, err error) {
query := fmt.Sprintf("insert into %s (%s) values (?, ?, ?, ?, ?, ?)", m.table, userRowsExpectAutoSet)
return conn.Exec(query, data.User, data.Name, data.Password, data.Mobile, data.Gender, data.Nickname)
}, userNameKey, userMobileKey, userKey)
return ret, err
}
func (m *defaultUserModel) FindOne(id int64) (*User, error) {
userIdKey := fmt.Sprintf("%s%v", cacheUserIdPrefix, id)
var resp User
err := m.QueryRow(&resp, userIdKey, func(conn sqlx.SqlConn, v interface{}) error {
query := fmt.Sprintf("select %s from %s where `id` = ? limit 1", userRows, m.table)
return conn.QueryRow(v, query, id)
})
switch err {
case nil:
return &resp, nil
case sqlc.ErrNotFound:
return nil, ErrNotFound
default:
return nil, err
}
}
func (m *defaultUserModel) FindOneByUser(user string) (*User, error) {
userKey := fmt.Sprintf("%s%v", cacheUserPrefix, user)
var resp User
err := m.QueryRowIndex(&resp, userKey, m.formatPrimary, func(conn sqlx.SqlConn, v interface{}) (i interface{}, e error) {
query := fmt.Sprintf("select %s from %s where `user` = ? limit 1", userRows, m.table)
if err := conn.QueryRow(&resp, query, user); err != nil {
return nil, err
}
return resp.Id, nil
}, m.queryPrimary)
switch err {
case nil:
return &resp, nil
case sqlc.ErrNotFound:
return nil, ErrNotFound
default:
return nil, err
}
}
func (m *defaultUserModel) FindOneByName(name string) (*User, error) {
userNameKey := fmt.Sprintf("%s%v", cacheUserNamePrefix, name)
var resp User
err := m.QueryRowIndex(&resp, userNameKey, m.formatPrimary, func(conn sqlx.SqlConn, v interface{}) (i interface{}, e error) {
query := fmt.Sprintf("select %s from %s where `name` = ? limit 1", userRows, m.table)
if err := conn.QueryRow(&resp, query, name); err != nil {
return nil, err
}
return resp.Id, nil
}, m.queryPrimary)
switch err {
case nil:
return &resp, nil
case sqlc.ErrNotFound:
return nil, ErrNotFound
default:
return nil, err
}
}
func (m *defaultUserModel) FindOneByMobile(mobile string) (*User, error) {
userMobileKey := fmt.Sprintf("%s%v", cacheUserMobilePrefix, mobile)
var resp User
err := m.QueryRowIndex(&resp, userMobileKey, m.formatPrimary, func(conn sqlx.SqlConn, v interface{}) (i interface{}, e error) {
query := fmt.Sprintf("select %s from %s where `mobile` = ? limit 1", userRows, m.table)
if err := conn.QueryRow(&resp, query, mobile); err != nil {
return nil, err
}
return resp.Id, nil
}, m.queryPrimary)
switch err {
case nil:
return &resp, nil
case sqlc.ErrNotFound:
return nil, ErrNotFound
default:
return nil, err
}
}
func (m *defaultUserModel) Update(data User) error {
userIdKey := fmt.Sprintf("%s%v", cacheUserIdPrefix, data.Id)
_, err := m.Exec(func(conn sqlx.SqlConn) (result sql.Result, err error) {
query := fmt.Sprintf("update %s set %s where `id` = ?", m.table, userRowsWithPlaceHolder)
return conn.Exec(query, data.User, data.Name, data.Password, data.Mobile, data.Gender, data.Nickname, data.Id)
}, userIdKey)
return err
}
func (m *defaultUserModel) Delete(id int64) error {
data, err := m.FindOne(id)
if err != nil {
return err
}
userNameKey := fmt.Sprintf("%s%v", cacheUserNamePrefix, data.Name)
userMobileKey := fmt.Sprintf("%s%v", cacheUserMobilePrefix, data.Mobile)
userIdKey := fmt.Sprintf("%s%v", cacheUserIdPrefix, id)
userKey := fmt.Sprintf("%s%v", cacheUserPrefix, data.User)
_, err = m.Exec(func(conn sqlx.SqlConn) (result sql.Result, err error) {
query := fmt.Sprintf("delete from %s where `id` = ?", m.table)
return conn.Exec(query, id)
}, userNameKey, userMobileKey, userIdKey, userKey)
return err
}
func (m *defaultUserModel) formatPrimary(primary interface{}) string {
return fmt.Sprintf("%s%v", cacheUserIdPrefix, primary)
}
func (m *defaultUserModel) queryPrimary(conn sqlx.SqlConn, v, primary interface{}) error {
query := fmt.Sprintf("select %s from %s where `id` = ? limit 1", userRows, m.table)
return conn.QueryRow(v, query, primary)
}
```
## Usage
```text
$ goctl model mysql -h
```
```text
NAME:
goctl model mysql - generate mysql model"
USAGE:
goctl model mysql command [command options] [arguments...]
COMMANDS:
ddl generate mysql model from ddl"
datasource generate model from datasource"
OPTIONS:
--help, -h show help
```
## Generation rules
* Default rules
By default we assume tables are created with createTime and updateTime columns (case- and underscore-style-insensitive) whose default value is `CURRENT_TIMESTAMP`, with updateTime also supporting `ON UPDATE CURRENT_TIMESTAMP`. These two columns are excluded from the generated `insert` and `update` statements and are never assigned. Of course, it is no problem if your table does not have them.
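A table following this convention might look like the DDL sketch below (illustrative only; the table and column names are hypothetical). goctl would then leave both timestamp columns out of the generated insert/update field lists:

```sql
CREATE TABLE `user` (
    `id` bigint NOT NULL AUTO_INCREMENT,
    `name` varchar(255) NOT NULL DEFAULT '',
    -- excluded from generated insert/update statements
    `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
    `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`)
) ENGINE = InnoDB;
```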
* With cache
* ddl
```shell
$ goctl model mysql ddl -src={patterns} -dir={dir} -cache
```
help
```
NAME:
goctl model mysql ddl - generate mysql model from ddl
USAGE:
goctl model mysql ddl [command options] [arguments...]
OPTIONS:
--src value, -s value the path or path globbing patterns of the ddl
--dir value, -d value the target dir
--style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
--cache, -c generate code with cache [optional]
--idea for idea plugin [optional]
```
* datasource
```shell
$ goctl model mysql datasource -url={datasource} -table={patterns} -dir={dir} -cache=true
```
help
```text
NAME:
goctl model mysql datasource - generate model from datasource
USAGE:
goctl model mysql datasource [command options] [arguments...]
OPTIONS:
--url value the data source of database,like "root:password@tcp(127.0.0.1:3306)/database
--table value, -t value the table or table globbing patterns in the database
--cache, -c generate code with cache [optional]
--dir value, -d value the target dir
--style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
--idea for idea plugin [optional]
```
:::tip
Both goctl model mysql ddl and datasource now accept a `--style` flag to control the file naming style.
:::
Only redis caching is currently supported. In cached mode, the generated `FindOne(ByXxx)` and `Delete` methods include cache logic. Only single-column indexes (full-text indexes excluded) are supported; for composite indexes we assume no cache is needed and the code would not be general-purpose, so it is not generated. In the example, the `id`, `name` and `mobile` columns of the user table are all single-column indexes.
* Without cache
* ddl
```shell
$ goctl model mysql ddl -src={patterns} -dir={dir}
```
* datasource
```shell
$ goctl model mysql datasource -url={datasource} -table={patterns} -dir={dir}
```
Only the basic CRUD structure is generated.
## Caching
For caching I will use a Q&A format, which I think describes the model's cache features more clearly.
* What does the cache store?
For primary-key lookups the entire struct is cached; for single-column index lookups (full-text indexes excluded) the primary key value is cached.
* Does an update (`update`) operation clear the cache?
Yes, but only the primary-key cache entry is cleared. (Why? That is beyond the scope of this note.)
* Why aren't `updateByXxx` and `deleteByXxx` generated for single-column index fields?
In theory there would be nothing wrong with that, but we believe data operations at the model layer should work on the whole struct, including queries. We do not recommend querying only some of the fields (though we do not forbid it), otherwise our cache would be pointless.
* Why aren't patterns like `findPageLimit` and `findAll` generated?
At present I consider everything beyond basic CRUD to be <i>business-specific</i> code, which developers are better off writing according to their business needs.
## Type conversion rules
| mysql dataType | golang dataType | golang dataType(if null&&default null) |
|----------------|-----------------|----------------------------------------|
| bool | int64 | sql.NullInt64 |
| boolean | int64 | sql.NullInt64 |
| tinyint | int64 | sql.NullInt64 |
| smallint | int64 | sql.NullInt64 |
| mediumint | int64 | sql.NullInt64 |
| int | int64 | sql.NullInt64 |
| integer | int64 | sql.NullInt64 |
| bigint | int64 | sql.NullInt64 |
| float | float64 | sql.NullFloat64 |
| double | float64 | sql.NullFloat64 |
| decimal | float64 | sql.NullFloat64 |
| date | time.Time | sql.NullTime |
| datetime | time.Time | sql.NullTime |
| timestamp | time.Time | sql.NullTime |
| time | string | sql.NullString |
| year | time.Time | sql.NullInt64 |
| char | string | sql.NullString |
| varchar | string | sql.NullString |
| binary | string | sql.NullString |
| varbinary | string | sql.NullString |
| tinytext | string | sql.NullString |
| text | string | sql.NullString |
| mediumtext | string | sql.NullString |
| longtext | string | sql.NullString |
| enum | string | sql.NullString |
| set | string | sql.NullString |
| json | string | sql.NullString |
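To illustrate the table above with a concrete mapping (a hypothetical DDL sketch; the table and columns are invented for illustration), NOT NULL columns map to plain Go types, while nullable columns with a NULL default map to the corresponding sql.Null* types:

```sql
CREATE TABLE `book` (
    `id` bigint NOT NULL AUTO_INCREMENT,   -- Go: int64
    `title` varchar(255) NOT NULL,         -- Go: string
    `price` decimal(10, 2) NOT NULL,       -- Go: float64
    `summary` text NULL,                   -- Go: sql.NullString
    `published_at` datetime NULL,          -- Go: sql.NullTime
    PRIMARY KEY (`id`)
) ENGINE = InnoDB;
```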


@ -0,0 +1,65 @@
---
sidebar_position: 5
---
# Custom plugins
goctl supports custom plugins for api files. So how do you write one? Let's start from how a plugin is ultimately used:
```shell
$ goctl api plugin -p goctl-android="android -package com.tal" -api user.api -dir .
```
The command above can be broken down into the following steps:
* goctl parses the api file
* goctl passes the parsed ApiSpec structure and the parameters to the goctl-android executable
* goctl-android generates code from the ApiSpec structure with its own custom logic.
The leading part, goctl api plugin -p, is fixed; goctl-android="android -package com.tal" is the plugin argument, where goctl-android is the plugin binary and android -package com.tal are the plugin's custom arguments; -api user.api -dir . are goctl's common arguments.
## How do I write a custom plugin?
The go-zero framework includes a very simple plugin demo; the code is as follows:
```go title="plugin.go"
package main

import (
	"fmt"

	"github.com/tal-tech/go-zero/tools/goctl/plugin"
)

func main() {
	plugin, err := plugin.NewPlugin()
	if err != nil {
		panic(err)
	}

	if plugin.Api != nil {
		fmt.Printf("api: %+v \n", plugin.Api)
	}

	fmt.Printf("dir: %s \n", plugin.Dir)
	fmt.Println("Enjoy anything you want.")
}
```
The line `plugin, err := plugin.NewPlugin()` parses the data goctl passes to the plugin, which contains the following fields:
```go
type Plugin struct {
	Api   *spec.ApiSpec
	Style string
	Dir   string
}
```
:::tip
Api: the structured data parsed from the api file
Style: optional, used to control the file naming convention
Dir: the working directory
:::
A complete android plugin demo project built on this plugin mechanism:
[https://github.com/zeromicro/goctl-android](https://github.com/zeromicro/goctl-android)


@ -0,0 +1,234 @@
---
sidebar_position: 3
---
# Building an RPC service
`goctl rpc` is the rpc code-generation module of the `goctl` scaffold. It supports generating `proto` templates and `rpc` service code, so with this tool you only need to focus on business logic instead of writing repetitive boilerplate. This keeps the effort centered on the business, speeds up development and lowers the error rate.
## Features
* Simple and easy to use
* Quickly boosts development efficiency
* Low error rate
* Stays close to protoc
## Quick start
### Option 1: quickly generate a greet service
Generate with the command `goctl rpc new ${serviceName}`.
For example, to generate a greet rpc service:
```Bash
goctl rpc new greet
```
The generated code is laid out as follows:
```text
.
├── etc             // yaml configuration file
│   └── greet.yaml
├── go.mod
├── greet           // pb.go folder ①
│   └── greet.pb.go
├── greet.go        // main entry
├── greet.proto     // proto file
├── greetclient     // call logic ②
│   └── greet.go
└── internal
    ├── config      // structs mapped from the yaml configuration
    │   └── config.go
    ├── logic       // business logic
    │   └── pinglogic.go
    ├── server      // rpc server
    │   └── greetserver.go
    └── svc         // dependent resources
        └── servicecontext.go
```
:::tip
① pb folder name (fixed to pb in older versions): taken from the last segment of the option go_package value in the proto file, converted according to certain rules; if that option is absent, taken from the package value. Roughly:
:::
```go title="google.golang.org/protobuf@v1.25.0/internal/strs/strings.go:71"
	if option.Name == "go_package" {
		ret.GoPackage = option.Constant.Source
	}
	...
	if len(ret.GoPackage) == 0 {
		ret.GoPackage = ret.Package.Name
	}
	ret.PbPackage = GoSanitized(filepath.Base(ret.GoPackage))
	...
```
:::tip
② The call-layer folder name is taken from the service name in the proto; if that service name equals the pb folder name, "client" is appended to the service name to keep pb and call apart.
:::
```go
if strings.ToLower(proto.Service.Name) == strings.ToLower(proto.GoPackage) {
	callDir = filepath.Join(ctx.WorkDir, strings.ToLower(stringx.From(proto.Service.Name+"_client").ToCamel()))
}
```
### Option 2: generate an rpc service from a given proto file
* Generate a proto template
```Bash
goctl rpc template -o=user.proto
```
```protobuf title="user.proto"
syntax = "proto3";

package remote;
option go_package = "remote";

message Request {
  // username
  string username = 1;
  // user password
  string password = 2;
}

message Response {
  // user name
  string name = 1;
  // user gender
  string gender = 2;
}

service User {
  // login
  rpc Login(Request)returns(Response);
}
```
* Generate the rpc service code
```Bash
goctl rpc proto -src user.proto -dir .
```
## Prerequisites
* A Go environment is installed
* protoc & protoc-gen-go are installed and the environment variables are set
* For more questions, see <a href="#notes">Notes</a>
## Usage
### rpc service generation usage
```Bash
goctl rpc proto -h
```
```Bash
NAME:
goctl rpc proto - generate rpc from proto
USAGE:
goctl rpc proto [command options] [arguments...]
OPTIONS:
--src value, -s value the file path of the proto source file
--proto_path value, -I value native command of protoc, specify the directory in which to search for imports. [optional]
--dir value, -d value the target path of the code
--style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
--idea whether the command execution environment is from idea plugin. [optional]
```
### Parameter description
* --src (required): the proto source; currently only a single proto file is supported
* --proto_path (optional): protoc's native flag for specifying where to search for proto imports; multiple paths may be given, e.g. `goctl rpc -I={path1} -I={path2} ...`; it can be omitted when there are no imports. The current proto path does not need to be specified, its `-I` is built in. For detailed usage see `protoc -h`
* --dir (optional, defaults to the directory of the proto file): the target directory for the generated code
* --style (optional): the file naming style of the output directory, see https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md
* --idea (optional): whether the command is executed from the idea plugin; can be ignored when running in a terminal
### What developers need to do
Focus on writing business code and leave the repetitive, business-agnostic work to goctl. After the rpc service code is generated, developers only need to modify:
* the service configuration (etc/xx.yaml, internal/config/config.go)
* the business logic (internal/logic/xxlogic.go)
* the resource context (internal/svc/servicecontext.go)
### Notes
* proto does not yet support generating from multiple files at once
* proto does not support importing external dependency packages, and message does not support inline definitions
* Currently the main file, shared file, and handler files are forcibly overwritten, while the files developers need to write by hand are not. All files in the former category carry the header
``` shell
// Code generated by goctl. DO NOT EDIT!
// Source: xxx.proto
```
marker; please be careful not to write business code in them.
## proto import
* The requestType and returnType used in rpc definitions must be defined in the main proto file; for messages in proto, other proto files can be imported just like with protoc.
### Incorrect import
```protobuf title="greet.proto"
syntax = "proto3";
package greet;
option go_package = "greet";
import "base/common.proto";
message Request {
string ping = 1;
}
message Response {
string pong = 1;
}
service Greet {
rpc Ping(base.In) returns(base.Out);// request and return types do not support import
}
```
### Correct import
```protobuf title="greet.proto"
syntax = "proto3";
package greet;
option go_package = "greet";
import "base/common.proto";
message Request {
base.In in = 1;// import is supported here
}
message Response {
base.Out out = 2;// import is supported here
}
service Greet {
rpc Ping(Request) returns(Response);
}
```

---
sidebar_position: 6
---
# Template Management
## Template operations
Templates are the foundation of data-driven code generation: all generated code (rest api, rpc, model, docker, kube) depends on templates.
By default, the generator uses the templates built into memory. Developers who need to modify templates can write the templates to disk,
edit them, and the next code generation will load the templates from the specified path.
### Usage help
```text
NAME:
goctl template - template operation
USAGE:
goctl template command [command options] [arguments...]
COMMANDS:
init initialize the all templates(force update)
clean clean the all cache templates
update update template of the target category to the latest
revert revert the target template to the latest
OPTIONS:
--help, -h show help
```
### Initializing templates
```text
NAME:
goctl template init - initialize the all templates(force update)
USAGE:
goctl template init [command options] [arguments...]
OPTIONS:
--home value the goctl home path of the template
```
### Cleaning templates
```text
NAME:
goctl template clean - clean the all cache templates
USAGE:
goctl template clean [command options] [arguments...]
OPTIONS:
--home value the goctl home path of the template
```
### Updating templates of a category
```text
NAME:
goctl template update - update template of the target category to the latest
USAGE:
goctl template update [command options] [arguments...]
OPTIONS:
--category value, -c value the category of template, enum [api,rpc,model,docker,kube]
--home value the goctl home path of the template
```
### Reverting a template
```text
NAME:
goctl template revert - revert the target template to the latest
USAGE:
goctl template revert [command options] [arguments...]
OPTIONS:
--category value, -c value the category of template, enum [api,rpc,model,docker,kube]
--name value, -n value the target file name of template
--home value the goctl home path of the template
```
:::tip
`--home` specifies the template storage path
:::
### Template loading
During code generation, the template folder can be specified with `--home`. The commands that currently support specifying a template directory are:
- `goctl api go`: see `goctl api go --help` for details
- `goctl docker`: see `goctl docker --help` for details
- `goctl kube`: see `goctl kube --help` for details
- `goctl rpc new`: see `goctl rpc new --help` for details
- `goctl rpc proto`: see `goctl rpc proto --help` for details
- `goctl model mysql ddl`: see `goctl model mysql ddl --help` for details
- `goctl model mysql datasource`: see `goctl model mysql datasource --help` for details
- `goctl model postgresql datasource`: see `goctl model postgresql datasource --help` for details
- `goctl model mongo`: see `goctl model mongo --help` for details
By default (when `--home` is not specified), templates are read from the `$HOME/.goctl` directory.
### Usage example
* Initialize the templates into the `$HOME/template` directory
```text
$ goctl template init --home $HOME/template
```
```text
Templates are generated in /Users/anqiansong/template, edit on your risk!
```
* Generate the greet rpc service using the `$HOME/template` templates
```text
$ goctl rpc new greet --home $HOME/template
```
```text
Done
```
## Modifying templates
### Scenario
Implement a uniform response body in the following format:
```json
{
"code": 0,
"msg": "OK",
"data": {} // ①
}
```
① actual response data
:::tip
The code generated by `go-zero` does not wrap responses this way
:::
### Preparation
We first write a `Response` method in the `response` package of a project whose `module` is `greet`; the directory tree looks roughly like:
```text
greet
├── response
│   └── response.go
└── xxx...
```
The code is as follows:
```go
package response
import (
"net/http"
"github.com/tal-tech/go-zero/rest/httpx"
)
type Body struct {
Code int `json:"code"`
Msg string `json:"msg"`
Data interface{} `json:"data,omitempty"`
}
func Response(w http.ResponseWriter, resp interface{}, err error) {
var body Body
if err != nil {
body.Code = -1
body.Msg = err.Error()
} else {
body.Msg = "OK"
body.Data = resp
}
httpx.OkJson(w, body)
}
```
### Modifying the handler template
```shell
$ vim ~/.goctl/api/handler.tpl
```
Replace the template with the following content:
```go
package handler
import (
"net/http"
"greet/response"// ①
{{.ImportPackages}}
)
func {{.HandlerName}}(ctx *svc.ServiceContext) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
{{if .HasRequest}}var req types.{{.RequestType}}
if err := httpx.Parse(r, &req); err != nil {
httpx.Error(w, err)
return
}{{end}}
l := logic.New{{.LogicType}}(r.Context(), ctx)
{{if .HasResp}}resp, {{end}}err := l.{{.Call}}({{if .HasRequest}}req{{end}})
{{if .HasResp}}response.Response(w, resp, err){{else}}response.Response(w, nil, err){{end}}//②
}
}
```
① Replace with your real `response` package name; for reference only
② Custom template content
:::tip
If there is no local `~/.goctl/api/handler.tpl` file, you can initialize it with the template initialization command `goctl template init`
:::
### Comparison before and after the template change
* Before
```go
func GreetHandler(ctx *svc.ServiceContext) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
var req types.Request
if err := httpx.Parse(r, &req); err != nil {
httpx.Error(w, err)
return
}
l := logic.NewGreetLogic(r.Context(), ctx)
resp, err := l.Greet(req)
// the following will be replaced by the custom template
if err != nil {
httpx.Error(w, err)
} else {
httpx.OkJson(w, resp)
}
}
}
```
* After
```go
func GreetHandler(ctx *svc.ServiceContext) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
var req types.Request
if err := httpx.Parse(r, &req); err != nil {
httpx.Error(w, err)
return
}
l := logic.NewGreetLogic(r.Context(), ctx)
resp, err := l.Greet(req)
response.Response(w, resp, err)
}
}
```
### Response body before and after the template change
* Before
```json
{
"message": "Hello go-zero!"
}
```
* After
```json
{
"code": 0,
"msg": "OK",
"data": {
"message": "Hello go-zero!"
}
}
```
## Summary
This document only uses the HTTP response as an example to describe the custom template workflow; other custom template scenarios include:
* adding kmq to the model layer
* generating model instances with a TTL option in the model layer
* customizing the HTTP response format

---
sidebar_position: 1
---
# Introduction to goctl
`goctl` is pronounced "go control", not "go C-T-L". `goctl` means: don't be controlled by code, control it instead. The `go` in the name does not refer to `golang`. When designing `goctl`, I hoped it would free our hands 👈
### api generation
| Name | Function | Example |
| --- | --- | --- |
| `-o` | generate an api file | `goctl api -o user.api` |
| `new` | quickly create an api service | `goctl api new user` |
| `format` | api formatting, used by `vscode` <br /> `-dir` target directory <br /> `-iu` whether to auto-update goctl <br /> `-stdin` whether to read from standard input | |
| `validate` | validate whether the api file is valid <br/> `-api` specify the api file source | `goctl api validate -api user.api` |
| `doc` | generate markdown docs <br/> `-dir` specify the directory | `goctl api doc -dir user` |
| `go` | generate a golang api service<br/>`-dir` specify the output directory<br/>`-api` specify the api file source<br/>`-force` whether to force-overwrite existing files<br/>`-style` specify the file naming style, gozero: lowercase, go_zero: snake case, GoZero: camel case | |
| `java` | generate client code for the api service in java<br/>`-dir` specify the output directory<br/>`-api` specify the api file source | |
| `ts` | generate client code for the api service in typescript<br/>`-dir` specify the output directory<br/>`-api` specify the api file source<br/>`webapi`<br/>`caller`<br/>`unwrap` | |
| `dart` | generate client code for the api service in dart<br/>`-dir` specify the output directory<br/>`-api` specify the api file source | |
| `kt` | generate client code for the api service in kotlin<br/>`-dir` specify the output directory<br/>`-api` specify the api file source<br/>`pkg` specify the package name | |
| `plugin` | `-plugin` executable file<br/>`-dir` output directory<br/>`-api` api source file<br/>`-style` file naming style | |
### rpc generation
| Name | Function | Example |
| --- | --- | --- |
| `new` | quickly generate an rpc service<br/>`-idea` marks whether the command comes from the idea plugin, used for idea plugin development; ignore when running in a terminal [optional]<br/>`-style` specify the file naming style, gozero: lowercase, go_zero: snake case, GoZero: camel case | |
| `template` | create a proto template file<br/>`-idea` marks whether the command comes from the idea plugin, used for idea plugin development; ignore when running in a terminal [optional]<br/>`-out,o` specify the output directory | |
| `proto` | generate an rpc service from a proto file<br/>`-src,s` specify the proto file source<br/>`-proto_path,I` specify the proto import search directories, protoc's native flag, see protoc -h for details<br/>`-dir,d` specify the output directory<br/>`-idea` marks whether the command comes from the idea plugin, used for idea plugin development; ignore when running in a terminal [optional]<br/>`-style` specify the file naming style, gozero: lowercase, go_zero: snake case, GoZero: camel case | |
### model generation
| Name | Function | Example |
| --- | --- | --- |
| `mysql` | generate model code from mysql<br/>&emsp;&emsp;`ddl` generate model code from a ddl file<br/>&emsp;&emsp;&emsp;&emsp;`-src,s` specify the sql file containing the ddl, wildcards supported<br/>&emsp;&emsp;&emsp;&emsp;`-dir,d` specify the output directory<br/>&emsp;&emsp;&emsp;&emsp;`-style` specify the file naming style, gozero: lowercase, go_zero: snake case, GoZero: camel case<br/>&emsp;&emsp;&emsp;&emsp;`-cache,c` whether the generated code includes redis cache logic, bool<br/>&emsp;&emsp;&emsp;&emsp;`-idea` marks whether the command comes from the idea plugin, used for idea plugin development; ignore when running in a terminal [optional]<br/>&emsp;&emsp;`datasource` generate model code from a database connection<br/>&emsp;&emsp;&emsp;&emsp;`-url` specify the database connection<br/>&emsp;&emsp;&emsp;&emsp;`-table,t` specify the table names, wildcards supported<br/>&emsp;&emsp;&emsp;&emsp;`-dir,d` specify the output directory<br/>&emsp;&emsp;&emsp;&emsp;`-style` specify the file naming style, gozero: lowercase, go_zero: snake case, GoZero: camel case<br/>&emsp;&emsp;&emsp;&emsp;`-cache,c` whether the generated code includes redis cache logic, bool<br/>&emsp;&emsp;&emsp;&emsp;`-idea` marks whether the command comes from the idea plugin, used for idea plugin development; ignore when running in a terminal [optional] | |
| `mongo` | generate model code from mongo<br/>&emsp;&emsp;`-type,t` specify the Go type name<br/>&emsp;&emsp;`-cache,c` whether the generated code includes redis cache logic, bool, default no<br/>&emsp;&emsp;`-dir,d` specify the output directory<br/>&emsp;&emsp;`-style` specify the file naming style, gozero: lowercase, go_zero: snake case, GoZero: camel case | |
### template operations
| Name | Function | Example |
| --- | --- | --- |
| `init` | initialize the `api`/`rpc`/`model` templates | `goctl template init` |
| `clean` | clear the cached templates | `goctl template clean` |
| `update` | update templates<br/>`-category,c` specify the group to update: `api`/`rpc`/`model` | `goctl template update -c api` |
| `revert` | revert the specified template file<br/>`-category,c` specify the group to revert: `api`/`rpc`/`model`<br/>`-name,n` specify the template file name | |
### config: config file generation
| Name | Function | Example |
| --- | --- | --- |
| `-path,p` | specify the config file directory | `goctl config -p user` |
### docker: generate a Dockerfile
| Name | Function | Example |
| --- | --- | --- |
| `-go` | specify the file containing the main function | |
| `-port` | specify the exposed port | |
### upgrade: update goctl to the latest version
### kube: generate k8s deployment files
### deploy: k8s deployment
| Name | Function | Example |
| --- | --- | --- |
| `-name` | service name | |
| `-namespace` | specify the k8s namespace | |
| `-image` | specify the image name | |
| `-secret` | specify the k8s secret used to pull the image | |
| `-requestCpu` | specify the default cpu request | |
| `-requestMem` | specify the default memory request | |
| `-limitCpu` | specify the cpu limit | |
| `-limitMem` | specify the memory limit | |
| `-o` | output directory for deployment.yaml | |
| `-replicas` | specify the number of replicas | |
| `-revisions` | specify the number of release records to keep | |
| `-port` | specify the service port | |
| `-nodePort` | specify the externally exposed service port | |
| `-minReplicas` | specify the minimum number of replicas | |
| `-maxReplicas` | specify the maximum number of replicas | |

{
"label": "Framework Components",
"position": 3
}

---
sidebar_position: 7
---
# Load Balancing
### Background
When choosing a load balancing algorithm, we want it to satisfy the following requirements:
- Affinity for partition and data-center aware scheduling
- Each selection should pick the node with the lowest load whenever possible
- Each selection should prefer the fastest-responding node whenever possible
- No manual intervention for faulty nodes
- When a node fails, the load balancer can isolate it automatically
- When a failed node recovers, traffic to it is restored automatically
### Core ideas of the algorithm
#### p2c
`p2c` (Power of Two Choices): randomly pick two nodes out of many.
`go-zero` makes up to 3 random picks; if a pick yields nodes whose health meets the requirements, the selection stops and those two nodes are used.
#### EWMA
`EWMA` (Exponentially Weighted Moving Average): the weight of each value decays exponentially over time, so values closer to the present are weighted more heavily, reflecting the average over the recent period.
- Formula:
![ewma](/img/ewma.png)
- Variables:
  - Vt: the EWMA value at request t
  - Vt-1: the EWMA value at request t-1
  - β: a constant
#### Advantages of EWMA
- Compared with an ordinary arithmetic mean, EWMA does not need to keep all past values, so it needs significantly less computation and also less storage.
- A plain average is insensitive to network latency, while EWMA can tune β according to request frequency, quickly detecting latency spikes or better reflecting the overall average:
  - When requests are frequent, the node's network load has risen and we want to observe its request latency (which reflects its load), so we decrease β accordingly. The smaller β is, the closer the EWMA is to the latest latency, so network spikes are detected quickly.
  - When requests are infrequent, we increase β, so the computed EWMA is closer to the overall average.
#### Computing β
`go-zero` uses the decay function from Newton's law of cooling to compute the `β` value in the `EWMA` algorithm:
![ewma](/img/β.png)
where `Δt` is the interval between two requests, and `e`, `k` are constants.
### Implementing a custom load balancer in gRPC
First we need to implement the google.golang.org/grpc/balancer/base/base.go PickerBuilder interface; gRPC calls its Build method whenever the service nodes are updated:
```go title="grpc-go/balancer/base/base.go"
type PickerBuilder interface {
// Build returns a picker that will be used by gRPC to pick a SubConn.
Build(info PickerBuildInfo) balancer.Picker
}
```
We also need to implement the google.golang.org/grpc/balancer/balancer.go Picker interface, which performs the actual load balancing: it picks a node to serve the request.
```go title="grpc-go/balancer/balancer.go"
type Picker interface {
Pick(info PickInfo) (PickResult, error)
}
```
Finally, register our balancer into the balancer map.
### Main logic of go-zero's load balancing
- On every node update, `gRPC` calls the `Build` method; in `Build` we save the information of all nodes.
- When `gRPC` needs a node to handle a request, it calls the `Pick` method. `go-zero` implements the `p2c` algorithm in `Pick`: it picks candidate nodes, computes their load from their `EWMA` values, and returns the less loaded node for gRPC to use.
- When a request finishes, `gRPC` calls `PickResult.Done`; there `go-zero` records the request's latency and other information, and computes and stores the `EWMA` value for load calculations on future requests.
### Load balancing code walkthrough
#### Saving the information of all service nodes
We need to save each node's latency for the current request, its `EWMA`, and related information; `go-zero` models each node as:
```go title="go-zero/zrpc/internal/balancer/p2c/p2c.go"
type subConn struct {
addr resolver.Address
conn balancer.SubConn
lag uint64 // stores the ewma value
inflight int64 // the number of requests this node is currently handling
success uint64 // marks the recent health of this connection
requests int64 // the total number of requests
last int64 // the latency of the last request, used to compute the ewma value
pick int64 // the last time this node was picked
}
```
#### `p2cPicker` implements the `balancer.Picker` interface; `conns` holds the information of all service nodes
```go title="go-zero/zrpc/internal/balancer/p2c/p2c.go"
type p2cPicker struct {
conns []*subConn // information of all nodes
r *rand.Rand
stamp *syncx.AtomicDuration
lock sync.Mutex
}
```
#### When nodes are updated, `gRPC` calls the `Build` method with all node information; here we save each node in a subConn struct and gather them into a `p2cPicker`
```go title="go-zero/zrpc/internal/balancer/p2c/p2c.go:42"
func (b *p2cPickerBuilder) Build(info base.PickerBuildInfo) balancer.Picker {
......
var conns []*subConn
for conn, connInfo := range readySCs {
conns = append(conns, &subConn{
addr: connInfo.Address,
conn: conn,
success: initSuccess,
})
}
return &p2cPicker{
conns: conns,
r: rand.New(rand.NewSource(time.Now().UnixNano())),
stamp: syncx.NewAtomicDuration(),
}
}
```
#### Randomly picking nodes, split into three cases:
The main implementation is:
```go title="go-zero/zrpc/internal/balancer/p2c/p2c.go:80"
switch len(p.conns) {
case 0: // no nodes: return an error
return emptyPickResult, balancer.ErrNoSubConnAvailable
case 1: // one node: return it directly
chosen = p.choose(p.conns[0], nil)
case 2: // two nodes: compute the load and return the less loaded one
chosen = p.choose(p.conns[0], p.conns[1])
default: // more nodes: p2c picks two, compares their load, returns the less loaded one
var node1, node2 *subConn
// pick two random nodes, up to 3 times
for i := 0; i < pickTimes; i++ {
a := p.r.Intn(len(p.conns))
b := p.r.Intn(len(p.conns) - 1)
if b >= a {
b++
}
node1 = p.conns[a]
node2 = p.conns[b]
// if this pick meets the health requirements, stop picking
if node1.healthy() && node2.healthy() {
break
}
}
// compare the load of the two nodes and choose the less loaded one
chosen = p.choose(node1, node2)
}
```
- With a single service node, return it directly for gRPC to use
- With two service nodes, compute the load via the EWMA values and return the less loaded node for gRPC to use
- With more service nodes, use p2c to pick two, compare their load, and return the less loaded node for gRPC to use
#### `load` computes a node's load
The `choose` method above calls the `load` method to compute node load.
The load formula is: `load = ewma * inflight`
Briefly: `ewma` is roughly the average request latency and `inflight` is the number of requests the node is currently handling; their product approximates the node's current network load.
```go
func (c *subConn) load() int64 {
// compute the node's load via EWMA; add 1 to avoid zero values
lag := int64(math.Sqrt(float64(atomic.LoadUint64(&c.lag) + 1)))
load := lag * (atomic.LoadInt64(&c.inflight) + 1)
if load == 0 {
return penalty
}
return load
}
```
#### When a request finishes, update the node's `EWMA` and related information
```go
func (p *p2cPicker) buildDoneFunc(c *subConn) func(info balancer.DoneInfo) {
start := int64(timex.Now())
return func(info balancer.DoneInfo) {
// decrement the in-flight request count
atomic.AddInt64(&c.inflight, -1)
now := timex.Now()
// save the finish time of this request and fetch the time of the previous request
last := atomic.SwapInt64(&c.last, int64(now))
td := int64(now) - last
if td < 0 {
td = 0
}
// compute the β value in the EWMA algorithm with the decay function from Newton's law of cooling
w := math.Exp(float64(-td) / float64(decayTime))
// compute the latency of this request
lag := int64(now) - start
if lag < 0 {
lag = 0
}
olag := atomic.LoadUint64(&c.lag)
if olag == 0 {
w = 0
}
// compute the EWMA value
atomic.StoreUint64(&c.lag, uint64(float64(olag)*w+float64(lag)*(1-w)))
success := initSuccess
if info.Err != nil && !codes.Acceptable(info.Err) {
success = 0
}
osucc := atomic.LoadUint64(&c.success)
atomic.StoreUint64(&c.success, uint64(float64(osucc)*w+float64(success)*(1-w)))
stamp := p.stamp.Load()
if now-stamp >= logInterval {
if p.stamp.CompareAndSwap(stamp, now) {
p.logStats()
}
}
}
}
```
- Decrement the node's in-flight request count by 1
- Save the finish time of the request, used to compute the gap since the node's previous request and to derive the β value in EWMA
- Compute the latency of this request and store the resulting EWMA value in the node's lag field
- Compute the node's health and store it in the node's success field

---
sidebar_position: 5
---
# Circuit Breaker
### How a circuit breaker works
The circuit-breaking mechanism borrows from the fuse that protects household circuits: when the circuit is overloaded, the fuse blows automatically, protecting the appliances on the circuit. In service governance, circuit breaking means that when the error rate returned by a callee exceeds a threshold, subsequent requests are not actually sent; instead the caller returns an error directly.
In this pattern, the caller maintains a state machine for each callee (call path), with three states:
* Closed: in this state we keep a counter of failed calls and total requests. If the failure rate within a time window reaches the configured threshold, the breaker switches to the open state and starts a timeout; when the timeout elapses it switches to half-open. The timeout gives the system a chance to fix whatever caused the failures and return to normal. In the closed state, call errors are time-based and reset within specific intervals, which prevents occasional errors from pushing the breaker into the open state.
* Open: in this state requests fail immediately. Usually a timeout timer is started; when it fires, the state switches to half-open. Alternatively a periodic probe can check whether the service has recovered.
* Half-Open: in this state a limited number of requests are allowed through to the callee. If those calls succeed, the callee is considered recovered, the breaker switches to closed, and the counters are reset. If some calls still fail, the callee is considered not yet recovered, the breaker switches back to open and resets the counters. The half-open state effectively prevents a recovering service from being knocked down again by a sudden flood of requests.
![breaker](/img/breaker.png)
### Adaptive circuit breaker
The breaker in `go-zero` references the overload protection algorithm from [`Google SRE`](https://landing.google.com/sre/sre-book/chapters/handling-overload/). The principle is as follows:
When a service is overloaded, an incoming request should be rejected quickly with a "service overloaded" error that costs far fewer resources than actually handling the request. However, this logic doesn't apply to all requests. For example, rejecting a request that performs a simple in-memory lookup may cost about as much as serving it (most of the cost is in application-layer protocol parsing; producing the result is trivial). Even when rejecting requests saves significant resources, sending the rejection replies still consumes some. If there are many rejections, that consumption can be considerable, and the service may stay overloaded just by busily sending rejection replies.
Client-side throttling solves this problem. When a client detects that a large fraction of its recent request errors are "service overloaded" errors, it starts limiting its own request rate. Requests beyond the limit fail locally without ever reaching the network layer.
We implement client throttling with a technique called adaptive throttling. Concretely, each client records the following over the past two minutes:
* requests: the total number of requests issued by the application layer (the application code running on top of the adaptive throttling system).
* accepts: the number of requests accepted by the backend.
Normally these two values are equal. As the backend starts rejecting requests, accepts becomes smaller than requests. Clients may keep sending requests until requests = K * accepts; beyond that limit, the client throttles itself and new requests are rejected locally with a probability computed inside the client as follows:
![breaker](/img/breaker_algo.png)
As the client starts rejecting requests itself, requests keeps growing beyond accepts. Although it may seem counterintuitive, since locally rejected requests never reach the backend, this is exactly the point of the algorithm: as the client's send rate grows relative to the backend's accept rate, we want to raise the probability of dropping requests locally.
We found that adaptive throttling works well in practice, keeping the overall request rate very stable. Even under massive overload, backends essentially sustain a 50% processing rate. A big advantage is that the client decides purely from local information, and the implementation is relatively simple: no extra dependencies, no latency impact.
For systems where the cost of handling a request is close to the cost of rejecting it, allowing 50% of resources to be spent on rejections may be unreasonable. In that case the fix is simple: tune the multiplier K for accepts in the client algorithm (e.g. 2):
* Decreasing the multiplier makes adaptive throttling more aggressive.
* Increasing the multiplier makes it less aggressive.
For example, changing the client limit from requests = 2 * accepts to requests = 1.1 * accepts means only 1 out of every 10 backend requests gets rejected. K = 2 is generally recommended: by letting the backend receive more requests than expected, we waste some backend resources but speed up the propagation of backend state to the clients. For instance, after the backend stops rejecting a client's requests, every client detects the change sooner.
```go title="go-zero/core/breaker/googlebreaker.go"
type googleBreaker struct {
k float64 // the multiplier, default 1.5
stat *collection.RollingWindow // sliding time window, counts accepted and failed requests
proba *mathx.Proba // dynamic probability
}
```
The adaptive circuit-breaking algorithm:
```go title="go-zero/core/breaker/googlebreaker.go"
func (b *googleBreaker) accept() error {
accepts, total := b.history() // accepted requests and total requests
weightedAccepts := b.k * float64(accepts)
// compute the drop probability
dropRatio := math.Max(0, (float64(total-protection)-weightedAccepts)/float64(total+1))
if dropRatio <= 0 {
return nil
}
// dynamically decide whether to trip the breaker
if b.proba.TrueOnProba(dropRatio) {
return ErrServiceUnavailable
}
return nil
}
```
Every request goes through the doReq method, which first uses accept to check whether the breaker is tripped. acceptable decides which errors count toward the failure count; it is defined as:
```go title="go-zero/zrpc/internal/codes/accept.go"
func Acceptable(err error) bool {
switch status.Code(err) {
case codes.DeadlineExceeded, codes.Internal, codes.Unavailable, codes.DataLoss: // errors counted as failures
return false
default:
return true
}
}
```
If the request is normal, markSuccess increments both the request count and the accepted count; otherwise only the request count is incremented:
```go title="go-zero/core/breaker/googlebreaker.go"
func (b *googleBreaker) doReq(req func() error, fallback func(err error) error, acceptable Acceptable) error {
// check whether the breaker is tripped
if err := b.accept(); err != nil {
if fallback != nil {
return fallback(err)
} else {
return err
}
}
defer func() {
if e := recover(); e != nil {
b.markFailure()
panic(e)
}
}()
// make the actual call
err := req()
// count an accepted request
if acceptable(err) {
b.markSuccess()
} else {
// count a failed request
b.markFailure()
}
return err
}
```
### Usage
Circuit-breaker protection is enabled by default in the go-zero framework; no extra configuration is needed.
:::tip
To use circuit breaking in a non-go-zero project, the breaker can also be ported and used standalone.
:::
When the breaker trips, the following error is reported:
```go title="go-zero/core/breaker/breaker.go"
var ErrServiceUnavailable = errors.New("circuit breaker is open")
```
[Usage example](https://github.com/zeromicro/zero-examples/tree/main/breaker)

---
sidebar_position: 4
---
# Caching
### Preface
Think about it: under a traffic surge, which part of the backend is most likely to become the first bottleneck? Most people will find it is the database: once traffic ramps up, the database hits slow queries or even locks up, and no amount of governance in the layers above helps.
That's why we often say you can tell how well a system is architected by looking at how its caching is designed. We once faced exactly this problem. Before I joined, our service had no cache. Although traffic wasn't high yet, every day at peak hours everyone got very nervous: the system went down several times a week, the database got crushed, and nothing could be done except restart. I was still a consultant at the time; looking at the system design, all I could do was stop the bleeding and have everyone add caching first. But because the team's understanding of caching was insufficient and the old system was chaotic, every developer hand-rolled caching in their own way. The result was that caches were in use, but the data was scattered everywhere, with no way to guarantee consistency. It was a painful experience that should resonate with many of you.
Then I redesigned the whole system from scratch, and the cache architecture played a very visible role in it; hence today's sharing.
I'll discuss the topic in the following parts:
- Common problems in caching systems
- Single-row query caching and automatic management
- Multi-row query caching
- Distributed cache system design
- Automated cache code generation in practice
Caching involves quite a few problems and topics; I discuss them along these dimensions:
- Stability
- Correctness
- Observability
- Standardization and tooling
### Cache stability
![system stability](/img/system-stability.png)
On stability, virtually every caching article and talk online covers three key points:
- Cache penetration
- Cache breakdown
- Cache avalanche
Why talk about stability first? Recall when we introduce a cache: typically when the DB is under pressure or even getting crushed regularly. So we introduce a caching system to solve stability problems in the first place.
### Cache penetration
![Cache Penetration](/img/cache-penetration.png)
Cache penetration happens when nonexistent data is requested. As the figure shows, request 1 for a piece of data goes to the cache first, but since the data doesn't exist, the cache certainly has nothing, so the request falls through to the DB. Requests 2 and 3 for the same data likewise pass through the cache and land on the DB. When nonexistent data is requested at volume, DB pressure becomes severe; worse, malicious requests can bring the DB down (an attacker discovers a nonexistent key and floods requests for it).
`go-zero`'s solution: for requests for nonexistent data, we also store a placeholder in the cache for a short time (say, one minute), so the number of DB requests for the same nonexistent key is decoupled from the actual request count. Of course, on the business side you can also delete the placeholder when such data is inserted, so new data is immediately visible to queries.
### Cache breakdown
Cache breakdown is caused by the expiry of hot data: being hot, a large number of requests for it may arrive at once right after it expires. If none of them find the data in the cache and they all fall through to the DB simultaneously, the DB comes under enormous instantaneous pressure or even locks up.
`go-zero`'s solution: for the same piece of data, use `core/syncx/SharedCalls` to ensure only one request hits the DB at a time; other requests for the same data wait for the first one to return and share its result or error. Depending on the concurrency level, we can choose an in-process lock (moderate concurrency) or a distributed lock (very high concurrency). Unless really necessary, we generally recommend the in-process lock; introducing a distributed lock adds complexity and cost. Following Occam's razor: entities should not be multiplied beyond necessity.
![cache breakdown](/img/cache-breakdown.png)
Let's walk through the cache-breakdown protection flow in the figure above; different colors denote different requests:
- The green request arrives first, finds nothing in the cache, and queries the DB
- The pink request arrives for the same data, sees a request already in flight, and waits for the green one to return (singleflight pattern)
- The green request returns; the pink request returns with the result shared by the green request
- Subsequent requests, like the blue one, read the data straight from the cache
### Cache avalanche
A cache avalanche happens when a large batch of cache entries loaded at the same time share the same expiry: when that moment arrives, a large portion of the cache expires within a short window, many requests fall through to the DB at once, and DB pressure spikes or the DB locks up.
For example, in online teaching under the pandemic, senior high, junior high, and primary school classes started in a few synchronized time slots, so lots of data was loaded together with identical expiry times. When the expiry hit, DB request spikes appeared one after another, and such pressure spikes carried over into the next cycle and could even compound.
`go-zero`'s solution:
- Use a distributed cache to prevent avalanches caused by single-point failures
- Add a 5% standard deviation to the expiry time (5% is the empirical p-value in hypothesis testing; interested readers can look it up)
![cache avalanche](/img/cache-avalanche.png)
As an experiment: with 10,000 keys, an expiry of 1 hour, and a 5% standard deviation, the expiry times spread fairly evenly between roughly 3400 and 3800 seconds. With our default expiry of 7 days, they spread evenly across a 16-hour window centered on 7 days. This nicely prevents cache avalanches.
### Cache correctness
We introduced caching to reduce DB pressure and improve system stability, so at first we focus on the stability of the caching system. Once stability is solved, we usually face data correctness problems: "the data was clearly updated, why does it still show the old value?" This is the classic "cache data consistency" problem. Let's carefully analyze its causes and remedies.
### Common approaches to data updates
First, a premise for the consistency discussion: the DB update and the cache deletion are not treated as one atomic operation, because in high-concurrency scenarios we cannot introduce a distributed lock to bind the two into an atomic operation; doing so would badly hurt concurrency and add system complexity. So we only pursue eventual consistency, and this article only targets high-concurrency scenarios without strong-consistency requirements (finance, payments, etc., judge for yourselves).
There are two main families of update strategies; everything else is basically a variant of them:
#### Delete the cache first, then update the DB
![delete update](/img/delete-update.png)
With this approach, on an update we first delete the cache and then update the DB (left figure). The flow of the whole operation:
- Request A needs to update the data; it deletes the corresponding cache first (DB not yet updated)
- Request B comes to read the data
- Request B sees nothing in the cache, reads the DB, and writes the old value into the cache (dirty data)
- Request A updates the DB
As you can see, request B wrote dirty data into the cache. If this data is read-mostly, the dirty data may persist for quite a long time (until a later update or the cache expires), which the business cannot accept.
#### Update the DB first, then delete the cache
![update delete](/img/update-delete.png)
The right side of the figure shows that between A updating the DB and deleting the cache, request B reads the old data, since A's operation has not completed yet. This window of reading old data is very short and satisfies eventual consistency.
The figure uses cache deletion rather than cache update, for the reason shown below:
![ab op](/img/ab-op.png)
In the figure above, "operation" stands for either deletion or update. For deletions, it does not matter whether A or B deletes first, since subsequent read requests will load the latest data from the DB. But for cache updates, whether A or B updates the cache first matters: if A updates last, the cache holds dirty data again. That is why go-zero only deletes the cache.
Let's look at the complete request-handling flow:
![complete process](/img/complete-process.png)
Note: different colors denote different requests.
- Request 1 updates the DB
- Request 2 queries the same data and returns the old value; returning stale data within this short window is acceptable (eventual consistency)
- Request 1 deletes the cache
- When request 3 arrives, the cache is empty, so it queries the DB, writes the result back to the cache, and returns it
- Subsequent requests read straight from the cache
How should we handle the scenario in the figure below?
![caching scenarios](/img/caching-scenarios.png)
Let's analyze several possible solutions to this problem:
- Use a distributed lock so that every update becomes an atomic operation. This is the worst option: it amounts to crippling ourselves, giving up high concurrency in pursuit of strong consistency. Remember my earlier emphasis: "this series only targets high-concurrency scenarios without strong-consistency requirements (finance, payments, etc., judge for yourselves)", so we discard this option first.
- Delay A's cache deletion, e.g. run it 1 second later. The downside: to handle this extremely low-probability case, all updates can only serve stale data within that second. This is not ideal either, and we'd rather not use it.
- Change A's cache deletion into setting a special placeholder, and have B set the cache with redis's `setnx` instruction; later requests that see the placeholder re-request the cache. This effectively adds a new state to cache deletion; look at the figure below:
![cache placeholder](/img/cache-placeholder.png)
Aren't we back where we started? Request A, upon seeing the placeholder, must either force-set the cache or check whether the content is a placeholder. So this doesn't solve the problem either.
So how does go-zero handle this case? We chose not to handle it. Surprised? Let's go back to first principles and analyze how it can occur:
- A read request finds no cached data (never loaded, or already expired), triggering a DB read
- An update for the same data arrives at exactly that moment
- The following ordering must hold: request B reads DB -> request A writes DB -> request A deletes cache -> request B sets cache
We know that a DB write must lock the row and is slow, while a read is not, so this ordering is relatively unlikely. Plus we have expiry times, so in real scenarios the probability is extremely low. Truly solving this class of problem would require 2PC or the Paxos protocol for consistency, which I doubt anyone wants to use here; it's too complex!
The hardest part of architecture, in my view, is knowing the trade-offs; finding the balance point with the best payoff is a real test of overall ability.
### Cache observability
The previous two parts solved cache stability and data consistency; at this point our system fully enjoys the value caching brings, and the zero-to-one problem is solved. What we consider next is how to further reduce cost: which caches bring real business value and which can be removed to lower server cost, which caches need more server resources, what the qps and hit ratio of each cache are, whether further tuning is needed, and so on.
![cache log](/img/cache-log.png)
The figure above is the cache monitoring log of a service: it serves 5057 requests per minute, 99.7% of which hit the cache; only 13 fall through to the DB, and the DB returns them all successfully. This monitor shows that the caching layer reduced DB pressure by three orders of magnitude (a 90% hit ratio is one order of magnitude, 99% is two, and 99.7% is roughly three): a very decent payoff.
If, on the contrary, the hit ratio were only 0.3%, the cache would bring no benefit, and we should remove it: first, it lowers system complexity (entities should not be multiplied beyond necessity), and second, it lowers server cost.
If the service's qps is especially high (enough to put real pressure on the DB) but the hit ratio is only 50%, meaning we halved the pressure, we should consider increasing the expiry time to raise the hit ratio, depending on the business.
If the service's qps is especially high (enough to put real pressure on the cache) and the hit ratio is also high, we can consider raising the qps the cache can sustain or adding an in-process cache to relieve the cache's load.
All of this is based on cache monitoring; only with observability can we do further targeted tuning and simplification. As I keep stressing: "no measurement, no optimization".
### How do we make cache usage standardized?
Those familiar with go-zero's design philosophy, or who have watched my talks, may remember me often saying "tooling beats conventions and documentation".
Caching involves a great many knowledge points; everyone's cache code will differ in style, and getting every detail right is very hard. Even for an old hand like me who has been writing programs for many years, getting all the details right in one pass is still very difficult. So how does go-zero solve this?
- Encapsulate the generalized solutions into the framework as much as possible. The whole cache control flow is then no longer anyone's worry; as long as you call the right method, there's no room for mistakes.
- Generate everything from the table-creation sql to the CRUD + Cache code with a tool in one shot, so nobody has to hand-write a pile of structs and control logic from table schemas.
![cache generate](/img/cache-generate.png)
This is a `CRUD + Cache` generation walkthrough taken from go-zero's official example `bookstore`. We can feed the `schema` that `goctl` needs via the table-creation `sql` file or a `datasource`, and then the `model` subcommand of `goctl` generates the required `CRUD + Cache` code in one shot.
This guarantees that everyone's cache code looks the same; generated by a tool, how could it differ? :P

---
sidebar_position: 8
---
# Service Discovery
### What is service registration and discovery?
For those working with microservices, the concepts of service registration and service discovery should not be too unfamiliar.
Simply put: when service A depends on service B, we need to tell service A where service B can be reached; that is the problem service registration and discovery solves.
![discovery](/img/discovery.png)
- Service B registering itself with the Service Registry is called service registration
- Service A discovering Service B's node information from the Service Registry is called service discovery
### Service registration
Service registration concerns the server side; a service registers after it starts, in several parts:
- Register on startup
- Renew periodically
- Deregister on exit
#### Register on startup
When a service node comes up, it needs to register itself with the `Service Registry` so that other nodes can discover it. Registration should happen only after the service has fully started and can accept requests, and it carries a TTL so the node is not still reachable after the process exits abnormally.
#### Renew periodically
Periodic renewal is like a `keep alive`: it periodically tells the `Service Registry` that the node is still there and can keep serving.
#### Deregister on exit
When the process exits, we should proactively deregister so that callers can promptly route requests to other nodes. At the same time, go-zero's adaptive load balancing ensures that even if a node exits without deregistering, it is removed in time.
### Service discovery
Service discovery concerns the caller side and generally splits into two problems:
- Fetching the existing nodes
- Watching incremental changes
There's also a common engineering problem:
- Coping with service-discovery failures
When the discovery service itself (etcd, consul, nacos, etc.) has problems, we must not modify the `endpoints` list we have already fetched; this better ensures that the services depending on etcd and the like keep interacting normally even after it goes down.
#### Fetching existing nodes
![get data](/img/get-data.png)
When `Service A` starts, it needs to fetch the existing node list of `Service B` from the `Service Registry`: `Service B1`, `Service B2`, `Service B3`; it then picks a suitable node per its own load balancing algorithm and sends the request.
#### Watching incremental changes
The figure above already has `Service B1`, `Service B2`, `Service B3`; if `Service B4` now starts, we need to notify `Service A` of the new node. As shown:
![new node](/img/new-node.png)
#### Coping with service-discovery failures
As service callers, we all cache a list of available nodes in memory. Whether we use `etcd`, `consul`, or `nacos`, we may face a failure of the discovery cluster. Taking etcd as an example: when etcd fails, we need to freeze Service B's node information instead of changing it, and we absolutely must not clear it; once cleared it cannot be re-fetched, while Service B's nodes are very likely still healthy, and go-zero automatically isolates and recovers faulty nodes.
![discovery trouble](/img/discovery-trouble.png)
That is roughly how service registration and discovery work; the implementation is still fairly complex. Next, let's look together at which discovery modes `go-zero` supports.
### Built-in service discovery in go-zero
`go-zero` supports three discovery modes by default:
- Direct connection
- etcd-based service discovery
- Kubernetes-Endpoints-based service discovery
#### Direct connection
Direct connection is the simplest mode: when our service is simple enough, e.g. a single machine can carry our business, we can just use it.
![direct connection](/img/direct-connection.png)
Simply specify the `endpoints` directly in the `rpc` configuration file, for example:
```yaml
Rpc:
  Endpoints:
    - 192.168.0.111:3456
    - 192.168.0.112:3456
```
The `zrpc` caller then distributes load across these two nodes; when one node has problems, `zrpc` removes it automatically and redistributes load to it once the node recovers.
The drawback of this method is that nodes cannot be added dynamically: every new node requires modifying the caller's configuration and restarting.
#### etcd-based service discovery
Once our services reach a certain scale, since one service may be depended on by many others, we need to be able to add and remove nodes dynamically without modifying many callers' configurations and restarting them.
Common service-discovery solutions include `etcd`, `consul`, `nacos`, etc.
![discovery etcd](/img/discovery-etcd.png)
`go-zero` has built-in integration with the `etcd`-based discovery solution; the usage is as follows:
```yaml
Rpc:
  Etcd:
    Hosts:
      - 192.168.0.111:2379
      - 192.168.0.112:2379
      - 192.168.0.113:2379
    Key: user.rpc
```
- Hosts is the etcd cluster address
- Key is the key the service registers under
#### Kubernetes-Endpoints-based service discovery
If our services are all deployed on a `Kubernetes` cluster: Kubernetes itself manages cluster state through its own `etcd`, and every service registers its node information into an `Endpoints` object, so we can simply grant the `deployment` permission to read the cluster's `Endpoints` objects to obtain node information.
![discovery k8s](/img/discovery-k8s.png)
- When each `Pod` of `Service B` starts, it registers itself into the cluster's `Endpoints`
- When each `Pod` of `Service A` starts, it can fetch `Service B`'s node information from the cluster's `Endpoints`
- When `Service B`'s nodes change, `Service A` can sense it by `watch`ing the cluster's `Endpoints`
Before this mechanism can work, we need to configure access to the cluster's `Endpoints` for the `pod`s in the current `namespace`; three concepts are involved:
- ClusterRole
  - defines cluster-wide permission roles, not scoped by namespace
- ServiceAccount
  - defines a service account scoped to a namespace
- ClusterRoleBinding
  - binds a defined ClusterRole to ServiceAccounts in different namespaces
A concrete Kubernetes configuration file can be referenced here, adjusting the namespace as needed.
Note: if you get a permission error fetching Endpoints at startup, check whether these configurations are in place :)
zrpc's `Kubernetes Endpoints`-based service discovery is used as follows:
```yaml
Rpc:
  Target: k8s://mynamespace/myservice:3456
```
Where:
- `mynamespace`: the `namespace` of the callee `rpc` service
- `myservice`: the name of the callee `rpc` service
- `3456`: the port of the callee `rpc` service
When writing the `deployment` configuration file, be sure to add `serviceAccountName` to specify which `ServiceAccount` to use, for example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine-deployment
  labels:
    app: alpine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpine
  template:
    metadata:
      labels:
        app: alpine
    spec:
      serviceAccountName: endpoints-reader
      containers:
      - name: alpine
        image: alpine
        command:
        - sleep
        - infinity
```
Note that `serviceAccountName` specifies which `ServiceAccount` the `pod`s created by this `deployment` use.
After both `server` and `client` are deployed to the `Kubernetes` cluster, you can rolling-restart all `server` nodes with:
```shell
kubectl rollout restart deploy -n adhoc server-deployment
```
Watch the `client` node logs with:
```shell
kubectl -n adhoc logs -f deploy/client-deployment --all-containers=true
```
You can see that our service discovery mechanism tracked the `server` node changes perfectly, with no failed requests during the service update.
:::tip
The complete code example is at https://github.com/zeromicro/zero-examples/tree/main/discovery/k8s
:::

---
sidebar_position: 6
---
# Load Shedding
### Why load shedding is needed
In a microservice cluster, call chains are intricate. As a service provider, you need a mechanism to protect yourself from being crushed by careless callers and to keep your own service highly available.
The most common protection mechanism is rate limiting. Its prerequisite is knowing the maximum concurrency you can handle, usually measured by load testing before release. But the limit differs per endpoint in day-to-day traffic, and as the system keeps iterating, its capacity often changes as well; load testing and retuning limit parameters before every release becomes very tedious.
So is there a simpler limiting mechanism that achieves maximal self-protection?
### What is adaptive load shedding
Adaptive load shedding protects the service itself very intelligently, dynamically deciding whether to shed load based on the service's own system load.
Design goals:
- Keep the system from being dragged down.
- Maintain the system's throughput while the system remains stable.
The key question then is: how do we measure the service's own load?
Detecting high load depends mainly on two metrics:
- whether the cpu is overloaded.
- whether the maximum concurrency is exceeded.
When both hold at once, the service is in a high-load state, and adaptive shedding kicks in.
Also note that under high concurrency, cpu load and concurrency often fluctuate sharply; in data terms we call this phenomenon spikes. Spikes can make the system shed load frequently back and forth, so we usually take the mean of the metrics over a recent period to smooth them. One could implement this by precisely recording the metrics over a window and computing the mean directly, but that occupies a fair amount of system resources.
Statistics offers an algorithm, the exponential moving average, which estimates a variable's local mean so that its updates depend on a recent window of history; the mean can be estimated without recording all historical values, saving precious server resources.
The principle of the moving-average algorithm is explained very clearly in a reference article.
Let variable V at time t be denoted Vt, and let θt be V's observation at time t. Without the moving-average model, Vt = θt; with it, Vt is updated as:
```text
V_t = β·V_{t-1} + (1-β)·θ_t
```
- When β = 0, Vt = θt
- When β = 0.9, Vt is roughly the average of the last 10 θt values
- When β = 0.99, Vt is roughly the average of the last 100 θt values
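To see what β means in practice, here is a small sketch applying the update rule above to a latency series containing one spike:

```go
package main

import "fmt"

// ewma applies V_t = β·V_{t-1} + (1-β)·θ_t over a series of observations,
// seeding with the first one.
func ewma(beta float64, observations []float64) float64 {
	var v float64
	for i, x := range observations {
		if i == 0 {
			v = x
			continue
		}
		v = beta*v + (1-beta)*x
	}
	return v
}

func main() {
	obs := []float64{100, 100, 100, 500, 100, 100} // one latency spike
	fmt.Printf("β=0.9: %.1f\n", ewma(0.9, obs))    // the spike is smoothed but still visible
	fmt.Printf("β=0:   %.1f\n", ewma(0, obs))      // tracks only the latest value
}
```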
### Code walkthrough
Now let's look at go-zero's adaptive load-shedding implementation.
![load](/img/load.png)
The adaptive shedder interface definition:
```go title="core/load/adaptiveshedder.go"
// 回调函数
Promise interface {
// 请求成功时回调此函数
Pass()
// 请求失败时回调此函数
Fail()
}
// 降载接口定义
Shedder interface {
// 降载检查
// 1. 允许调用,需手动执行 Promise.Pass()/Promise.Fail() 上报实际执行结果
// 2. 拒绝调用将会直接返回err服务过载错误 ErrServiceOverloaded
Allow() (Promise, error)
}
```
接口定义非常精简,意味着使用起来其实非常简单:对外只暴露一个 `Allow() (Promise, error)` 方法。
go-zero 使用示例:
业务中只需调用该方法判断是否降载:如果被降载则直接结束流程,否则执行业务逻辑,最后根据执行结果调用返回值 Promise 的回调上报结果即可。
```go
func UnarySheddingInterceptor(shedder load.Shedder, metrics *stat.Metrics) grpc.UnaryServerInterceptor {
ensureSheddingStat()
return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
handler grpc.UnaryHandler) (val interface{}, err error) {
sheddingStat.IncrementTotal()
var promise load.Promise
// 检查是否被降载
promise, err = shedder.Allow()
// 降载,记录相关日志与指标
if err != nil {
metrics.AddDrop()
sheddingStat.IncrementDrop()
return
}
// 最后回调执行结果
defer func() {
// 执行失败
if err == context.DeadlineExceeded {
promise.Fail()
// 执行成功
} else {
sheddingStat.IncrementPass()
promise.Pass()
}
}()
// 执行业务方法
return handler(ctx, req)
}
}
```
接口实现类定义
主要包含三类属性
- cpu 负载阈值:超过此值意味着 cpu 处于高负载状态。
- 冷却期:假如服务之前被降载过,那么将进入冷却期,目的在于防止降载过程中负载还未降下来立马加压导致来回抖动。因为降低负载需要一定的时间,处于冷却期内应该继续检查并发数是否超过限制,超过限制则继续丢弃请求。
- 并发数:当前正在处理的并发数,当前正在处理的并发平均数,以及最近一段内的请求数与响应时间,目的是为了计算当前正在处理的并发数是否大于系统可承载的最大并发数。
```go
// option参数模式
ShedderOption func(opts *shedderOptions)
// 可选配置参数
shedderOptions struct {
// 滑动时间窗口大小
window time.Duration
// 滑动时间窗口数量
buckets int
// cpu负载临界值
cpuThreshold int64
}
// 自适应降载结构体,需实现 Shedder 接口
adaptiveShedder struct {
// cpu负载临界值
// 高于临界值代表高负载需要降载保证服务
cpuThreshold int64
// 1s内有多少个桶
windows int64
// 并发数
flying int64
// 滑动平滑并发数
avgFlying float64
// 自旋锁,一个服务共用一个降载
// 统计当前正在处理的请求数时必须加锁
// 无损并发,提高性能
avgFlyingLock syncx.SpinLock
// 最后一次拒绝时间
dropTime *syncx.AtomicDuration
// 最近是否被拒绝过
droppedRecently *syncx.AtomicBool
// 请求数统计,通过滑动时间窗口记录最近一段时间内指标
passCounter *collection.RollingWindow
// 响应时间统计,通过滑动时间窗口记录最近一段时间内指标
rtCounter *collection.RollingWindow
}
```
自适应降载构造器:
```go
func NewAdaptiveShedder(opts ...ShedderOption) Shedder {
// 为了保证代码统一
// 当开发者关闭时返回默认的空实现,实现代码统一
// go-zero很多地方都采用了这种设计比如Breaker日志组件
if !enabled.True() {
return newNopShedder()
}
// options模式设置可选配置参数
options := shedderOptions{
// 默认统计最近5s内数据
window: defaultWindow,
// 默认桶数量50个
buckets: defaultBuckets,
// cpu负载
cpuThreshold: defaultCpuThreshold,
}
for _, opt := range opts {
opt(&options)
}
// 计算每个窗口间隔时间默认为100ms
bucketDuration := options.window / time.Duration(options.buckets)
return &adaptiveShedder{
// cpu负载
cpuThreshold: options.cpuThreshold,
// 1s的时间内包含多少个滑动窗口单元
windows: int64(time.Second / bucketDuration),
// 最近一次拒绝时间
dropTime: syncx.NewAtomicDuration(),
// 最近是否被拒绝过
droppedRecently: syncx.NewAtomicBool(),
// qps统计滑动时间窗口
// 忽略当前正在写入窗口(桶),时间周期不完整可能导致数据异常
passCounter: collection.NewRollingWindow(options.buckets, bucketDuration,
collection.IgnoreCurrentBucket()),
// 响应时间统计,滑动时间窗口
// 忽略当前正在写入窗口(桶),时间周期不完整可能导致数据异常
rtCounter: collection.NewRollingWindow(options.buckets, bucketDuration,
collection.IgnoreCurrentBucket()),
}
}
```
降载检查 Allow()
检查当前请求是否应该被丢弃。如果被丢弃,业务侧需要直接中断请求以保护服务,同时意味着降载生效并进入冷却期;如果放行则返回 promise,等待业务侧执行回调函数进行指标统计。
```go
// 降载检查
func (as *adaptiveShedder) Allow() (Promise, error) {
// 检查请求是否被丢弃
if as.shouldDrop() {
// 设置drop时间
as.dropTime.Set(timex.Now())
// 最近已被drop
as.droppedRecently.Set(true)
// 返回过载
return nil, ErrServiceOverloaded
}
// 正在处理请求数加1
as.addFlying(1)
// 这里每个允许的请求都会返回一个新的promise对象
// promise内部持有了降载指针对象
return &promise{
start: timex.Now(),
shedder: as,
}, nil
}
```
检查是否应该被丢弃shouldDrop()
```go
// 请求是否应该被丢弃
func (as *adaptiveShedder) shouldDrop() bool {
    // 当前 cpu 负载超过阈值,或者服务处于冷却期内
    // 都应该继续检查负载并尝试丢弃请求
if as.systemOverloaded() || as.stillHot() {
// 检查正在处理的并发是否超出当前可承载的最大并发数
// 超出则丢弃请求
if as.highThru() {
flying := atomic.LoadInt64(&as.flying)
as.avgFlyingLock.Lock()
avgFlying := as.avgFlying
as.avgFlyingLock.Unlock()
msg := fmt.Sprintf(
"dropreq, cpu: %d, maxPass: %d, minRt: %.2f, hot: %t, flying: %d, avgFlying: %.2f",
stat.CpuUsage(), as.maxPass(), as.minRt(), as.stillHot(), flying, avgFlying)
logx.Error(msg)
stat.Report(msg)
return true
}
}
return false
}
```
cpu 阈值检查 systemOverloaded()
cpu 负载值的计算采用滑动平均算法,以防止毛刺现象:每隔 250ms 采样一次,β 为 0.95,大致相当于历史 20 次 cpu 负载的平均值,时间周期约为 5s。
```go
// cpu 是否过载
func (as *adaptiveShedder) systemOverloaded() bool {
return systemOverloadChecker(as.cpuThreshold)
}
// cpu 检查函数
systemOverloadChecker = func(cpuThreshold int64) bool {
return stat.CpuUsage() >= cpuThreshold
}
// cpu滑动平均值
curUsage := internal.RefreshCpu()
prevUsage := atomic.LoadInt64(&cpuUsage)
// cpu = cpuᵗ⁻¹ * beta + cpuᵗ * (1 - beta)
// 滑动平均算法
usage := int64(float64(prevUsage)*beta + float64(curUsage)*(1-beta))
atomic.StoreInt64(&cpuUsage, usage)
```
检查是否处于冷却期 stillHot:
判断当前系统是否处于冷却期,如果处于冷却期内,应该继续尝试检查是否丢弃请求。主要是防止系统在过载恢复过程中负载还未降下来,立马又增加压力导致来回抖动,此时应该尝试继续丢弃请求。
```go
func (as *adaptiveShedder) stillHot() bool {
// 最近没有丢弃请求
// 说明服务正常
if !as.droppedRecently.True() {
return false
}
// 不在冷却期
dropTime := as.dropTime.Load()
if dropTime == 0 {
return false
}
// 冷却时间默认为1s
hot := timex.Since(dropTime) < coolOffDuration
// 不在冷却期,正常处理请求中
if !hot {
// 重置drop记录
as.droppedRecently.Set(false)
}
return hot
}
```
检查当前正在处理的并发数highThru()
一旦当前正在处理的并发数超过并发数承载上限,则进入降载状态。
这里为什么要加锁呢?因为自适应降载是全局共用的,必须加锁才能保证并发数平均值的正确性。
为什么用自旋锁呢?因为临界区非常小,自旋等待的开销低于挂起、唤醒 goroutine,不会阻塞其他 goroutine 的调度执行,性能更好。
```go
func (as *adaptiveShedder) highThru() bool {
// 加锁
as.avgFlyingLock.Lock()
// 获取滑动平均值
// 每次请求结束后更新
avgFlying := as.avgFlying
// 解锁
as.avgFlyingLock.Unlock()
// 系统此时最大并发数
maxFlight := as.maxFlight()
// 正在处理的并发数和平均并发数是否大于系统的最大并发数
return int64(avgFlying) > maxFlight && atomic.LoadInt64(&as.flying) > maxFlight
}
```
如何得到正在处理的并发数与平均并发数呢?
当前正在处理的并发数统计其实非常简单:每次允许请求时并发数 +1,请求完成后通过 promise 对象回调 -1,并利用滑动平均算法求得平均并发数。
```go
type promise struct {
// 请求开始时间
// 统计请求处理耗时
start time.Duration
shedder *adaptiveShedder
}
func (p *promise) Fail() {
// 请求结束,当前正在处理请求数-1
p.shedder.addFlying(-1)
}
func (p *promise) Pass() {
// 响应时间,单位毫秒
rt := float64(timex.Since(p.start)) / float64(time.Millisecond)
// 请求结束,当前正在处理请求数-1
p.shedder.addFlying(-1)
p.shedder.rtCounter.Add(math.Ceil(rt))
p.shedder.passCounter.Add(1)
}
func (as *adaptiveShedder) addFlying(delta int64) {
flying := atomic.AddInt64(&as.flying, delta)
// 请求结束后,统计当前正在处理的请求并发
if delta < 0 {
as.avgFlyingLock.Lock()
// 估算当前服务近一段时间内的平均请求数
as.avgFlying = as.avgFlying*flyingBeta + float64(flying)*(1-flyingBeta)
as.avgFlyingLock.Unlock()
}
}
```
只得到当前的并发数还不够,我们还需要知道当前系统能够处理的并发数上限,即最大并发数。
请求通过数与响应时间都是通过滑动窗口来实现的,关于滑动窗口的实现可以参考 自适应熔断器那篇文章。
当前系统的最大并发数 = 窗口单位时间内的最大通过数量 * 窗口单位时间内的最小响应时间。
```go
// 计算每秒系统的最大并发数
// 最大并发数 = 最大请求数qps* 最小响应时间rt
func (as *adaptiveShedder) maxFlight() int64 {
// windows = buckets per second
// maxQPS = maxPASS * windows
// minRT = min average response time in milliseconds
// maxQPS * minRT / milliseconds_per_second
// as.maxPass()*as.windows - 每个桶最大的qps * 1s内包含桶的数量
// as.minRt()/1e3 - 窗口所有桶中最小的平均响应时间 / 1000ms这里是为了转换成秒
return int64(math.Max(1, float64(as.maxPass()*as.windows)*(as.minRt()/1e3)))
}
// 滑动时间窗口内有多个桶
// 找到请求数最多的那个
// 每个桶占用的时间为 interval ms
// qps 指的是 1s 内的请求数,qps: maxPass * time.Second/interval
func (as *adaptiveShedder) maxPass() int64 {
var result float64 = 1
// 当前时间窗口内请求数最多的桶
as.passCounter.Reduce(func(b *collection.Bucket) {
if b.Sum > result {
result = b.Sum
}
})
return int64(result)
}
// 滑动时间窗口内有多个桶
// 计算最小的平均响应时间
// 因为需要计算近一段时间内系统能够处理的最大并发数
func (as *adaptiveShedder) minRt() float64 {
// 默认为1000ms
result := defaultMinRt
as.rtCounter.Reduce(func(b *collection.Bucket) {
if b.Count <= 0 {
return
}
// 请求平均响应时间
avg := math.Round(b.Sum / float64(b.Count))
if avg < result {
result = avg
}
})
return result
}
```
### 参考资料
[Google BBR 拥塞控制算法](https://queue.acm.org/detail.cfm?id=3022184)
[滑动平均算法原理](https://www.cnblogs.com/wuliytTaotao/p/9479958.html)
[go-zero 自适应降载](https://go-zero.dev/cn/loadshedding.html)

---
sidebar_position: 10
---
# 指标监控
### 监控接入
`go-zero` 框架中集成了基于 `prometheus` 的服务指标监控。但是没有显式打开,需要开发者在 `config.yaml` 中配置:
```yaml
Prometheus:
Host: 127.0.0.1
Port: 9091
Path: /metrics
```
如果开发者是在本地搭建 `Prometheus`,需要在 `Prometheus` 的配置文件 `prometheus.yaml` 中写入需要收集服务监控信息的配置:
```yaml
- job_name: 'file_ds'
static_configs:
- targets: ['your-local-ip:9091']
labels:
job: activeuser
app: activeuser-api
env: dev
instance: your-local-ip:service-port
```
因为本地是用 `docker` 运行的。将 `prometheus.yaml` 放置在 `docker-prometheus` 目录下:
```shell
docker run \
-p 9090:9090 \
-v dockeryml/docker-prometheus:/etc/prometheus \
prom/prometheus
```
打开 `localhost:9090` 就可以看到:
![prometheus](/img/prometheus.png)
点击 `http://service-ip:9091/metrics` 就可以看到该服务的监控信息:
![prometheus data](/img/prometheus-data.png)
上图我们可以看出有两种 `bucket`,以及 `count/sum` 指标。
`go-zero` 是如何集成监控指标的?监控的又是什么指标?我们如何定义自己的指标?下面就来解释这些问题。
:::tip
以上的基本接入可以参看我们的另外一篇文章:https://zeromicro.github.io/go-zero/service-monitor.html
:::
### 如何集成
上面例子中的请求方式是 `HTTP`,也就是在请求服务端时,监控指标数据不断被搜集。很容易想到是 中间件 的功能,具体代码:
```go title="https://github.com/tal-tech/go-zero/blob/master/rest/handler/prometheushandler.go"
var (
metricServerReqDur = metric.NewHistogramVec(&metric.HistogramVecOpts{
...
// 监控指标
Labels: []string{"path"},
// 直方图分布中,统计的桶
Buckets: []float64{5, 10, 25, 50, 100, 250, 500, 1000},
})
metricServerReqCodeTotal = metric.NewCounterVec(&metric.CounterVecOpts{
...
// 监控指标:直接在记录指标 incr() 即可
Labels: []string{"path", "code"},
})
)
func PromethousHandler(path string) func(http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// 请求进入的时间
startTime := timex.Now()
cw := &security.WithCodeResponseWriter{Writer: w}
defer func() {
// 请求返回的时间
metricServerReqDur.Observe(int64(timex.Since(startTime)/time.Millisecond), path)
metricServerReqCodeTotal.Inc(path, strconv.Itoa(cw.Code))
}()
// 中间件放行,执行完后续中间件和业务逻辑。重新回到这,做一个完整请求的指标上报
// [🧅:洋葱模型]
next.ServeHTTP(cw, r)
})
}
}
```
其实整个很简单:
- HistogramVec 负责请求耗时搜集:
- bucket 存放的就是 option 指定的耗时指标。某个请求耗时多少就会被聚集对应的桶,计数。
- 最终展示的就是一个路由在不同耗时的分布,很直观提供给开发者可以优化的区域。
- CounterVec 负责指定 labels 标签搜集:
- Labels: []string{"path", "code"}
- labels 相当于一个 tuple。go-zero 以 (path, code) 作为整体,记录不同路由、不同状态码的返回次数。如果 4xx、5xx 过多,是不是应该看看你的服务健康程度了?
### 如何自定义
`go-zero` 中也提供了 `prometheus metric` 的基本封装,供开发者开发自己的 prometheus 中间件。
:::tip
代码https://github.com/tal-tech/go-zero/tree/master/core/metric
:::
| 名称 | 用途 | 搜集函数 |
|----------------|-----------------|----------------------------------------|
| CounterVec | 单一的计数。用作QPS统计 | CounterVec.Inc() 指标+1 |
| GaugeVec | 单纯指标记录。适用于磁盘容量、CPU/Mem 使用率(可增加可减少) | GaugeVec.Inc()/GaugeVec.Add():指标 +1/指标加 N,也可以为负数 |
| HistogramVec | 反应数值的分布情况。适用于:请求耗时、响应大小 | HistogramVec.Observe(val, labels) 记录指标当前对应值,并找到值所在的桶,+1 |
另外对 `HistogramVec.Observe()` 做一个基本分析:
我们其实可以看到上图每个 HistogramVec 统计都会有3个序列出现
- _count数据个数
- _sum全部数据加和
- _bucket{le=a1}:处于 [-inf, a1] 的数据个数
所以我们也猜测在统计过程中分3种数据进行统计
```go
// 基本上在prometheus的统计都是使用 atomic CAS 方式进行计数的
// 性能要比使用 Mutex 要高
func (h *histogram) observe(v float64, bucket int) {
n := atomic.AddUint64(&h.countAndHotIdx, 1)
hotCounts := h.counts[n>>63]
if bucket < len(h.upperBounds) {
// val 对应数据桶 +1
atomic.AddUint64(&hotCounts.buckets[bucket], 1)
}
for {
oldBits := atomic.LoadUint64(&hotCounts.sumBits)
newBits := math.Float64bits(math.Float64frombits(oldBits) + v)
// sum指标数值 +v毕竟是总数sum
if atomic.CompareAndSwapUint64(&hotCounts.sumBits, oldBits, newBits) {
break
}
}
// count 统计 +1
atomic.AddUint64(&hotCounts.count, 1)
}
```
所以开发者想定义自己的监控指标:
- 在使用 goctl 生成API代码指定要生成的 中间件https://zeromicro.github.io/go-zero/middleware.html
- 在中间件文件书写自己需要统计的指标逻辑
- 当然,开发者也可以在业务逻辑中书写统计的指标逻辑。同上。
上述都是针对 HTTP 部分逻辑的解析RPC 部分的逻辑类似,你可以在 拦截器 部分看到设计。

---
sidebar_position: 1
---
# rest
### 概述
从日常开发经验来说,一个好的 web 框架大致需要满足以下特性:
* 路由匹配/多路由支持
* 支持自定义中间件
* 框架和业务开发完全解耦,方便开发者快速开发
* 参数校验/匹配
* 监控/日志/指标等服务自查功能
* 服务自保护(熔断/限流)
### rest概览
rest有如下特点
* 借助 `context` (不同于 `gin` 的 `context`),将资源初始化好 → 保存在 `serviceCtx` 中,在 `handler` 中共享(至于资源池化,交给资源自己处理,`serviceCtx` 只是入口和共享点)
* 独立 router 声明文件,同时加入 router group 的概念,方便开发者整理代码结构
* 内置若干中间件:监控/熔断/鉴权等
* 利用 goctl codegen + option 设计模式,方便开发者自己控制部分中间件的接入
下图描述了 rest 处理请求的模式和大部分处理路径。
* 框架内置的中间件已经帮开发者解决了大部分服务自处理的逻辑
* 同时 go-zero 在 business logic 处也给予开发者开箱即用的组件(dq、fx 等)
* 从开发模式上帮助开发者只需要关注自己的 business logic 以及所需资源准备
![rest](/img/rest.png)
### 启动流程
下图描述了整体 server 启动经过的模块和大致流程。准备按照如下流程分析 rest 实现:
* 基于 http.server 封装以及改造:把 engine(web框架核心) 和 option 隔离开
* 多路由匹配采取 radix-tree 构造
* 中间件采用洋葱模型 → []Middleware
* http parse 解析以及匹配校验 → httpx.Parse()
* 在请求过程会收集指标 (createMetrics()) 以及监控埋点 (prometheus)
![rest_start](/img/rest_start.png)
#### server engine
engine 贯穿整个 server 生命周期中:
* router 会携带开发者定义的 path/handler会在最后的 router.handle() 执行
* 注册的自定义中间件 + 框架中间件,在 router handler logic 前执行
在这里go-zero 处理的粒度在 route 上,封装和处理都在 route 一层层执行
![server_engine](/img/server_engine.jpeg)
### 路由匹配
那么当 request 到来,首先是如何到路由这一层的?
首先,在开发最原始的 http server 时,都有这么一段代码:
![basic_server](/img/basic_server.png)
`http.ListenAndServe()` 内部会执行到:`server.ListenAndServe()`
我们看看在 rest 里面是怎么运用的:
![rest_route](/img/rest_route.png)
而传入的 handler 其实就是router.NewRouter() 生成的 router。这个 router 承载了整个 server 的处理函数集合。
同时 http.Server 结构在初始化时,是把 handler 注入到里面的:
![rest_route](/img/rest_handle.png)
在 http.Server 接收 req 后,最终执行的也是:`handler.ServeHTTP(rw, req)`
![rest_route](/img/servehttp.png)
所以内置的 `router` 也需要实现 `ServeHTTP` 。至于 `router` 自己是怎么实现 `ServeHTTP` :无外乎就是寻找匹配路由,然后执行路由对应的 `handle logic`
### 解析参数
解析参数是 http 框架需要提供的基本能力。在 goctl code gen 生成的代码中handler 层已经集成了 req argument parse 函数:
![rest_route](/img/rest_parse.png)
进入到 `httpx.Parse()` ,主要解析以下几块:
```go title="https://github.com/zeromicro/go-zero/blob/master/rest/httpx/requests.go#L32:6"
```
* 解析path
* 解析form表单
* 解析http header
* 解析json
:::info
Parse() 中的 参数校验 的功能见:
https://go-zero.dev/cn/api-grammar.html 中的 tag修饰符
:::
### 使用示例
[使用示例](https://github.com/zeromicro/zero-examples/tree/main/http)

---
sidebar_position: 9
---
# 链路追踪
:::tip
正在疯狂的写文档...
[想要为社区贡献文档吗?](../intro/join-us.md#文档贡献)
:::

{
"label": "简介",
"position": 0
}

---
sidebar_position: 1
---
# 关于我们
## go-zero
go-zero 是一个集成了各种工程实践的 web 和 rpc 框架。通过弹性设计保障了大并发服务端的稳定性,经受了充分的实战检验。
go-zero 包含极简的 API 定义和生成工具 goctl可以根据定义的 api 文件一键生成 Go, iOS, Android, Kotlin, Dart, TypeScript, JavaScript 代码,并可直接运行。
## go-zero作者
<details>
<summary>万俊峰(kevwan)</summary>
<div>
七牛云技术副总裁拥有14年研发团队管理经验16年架构设计经验20年工程实战经验负责过多个大型项目的架构设计曾多次合伙创业被收购阿里云MVPArchSummit全球架构师峰会明星讲师GopherChina大会主持人 & 金牌讲师QCon+ Go语言出品人兼讲师腾讯云开发者大会讲师。
</div>
</details>
## go-zero社区
我们目前拥有7000多人的社区成员在这里你可以和大家讨论任何关于go-zero的技术问题反馈获取最新的go-zero信息以及各位大佬每天分享的技术心得。
## go-zero社区群
<img src="https://raw.githubusercontent.com/tal-tech/zero-doc/main/doc/images/wechat.jpg" width="300" alt="社区群"/>

---
sidebar_position: 0
---
# 简介
## go-zero 介绍
go-zero 是一个集成了各种工程实践的 web 和 rpc 框架。通过弹性设计保障了大并发服务端的稳定性,经受了充分的实战检验。
go-zero 包含极简的 API 定义和生成工具 goctl可以根据定义的 api 文件一键生成 Go, iOS, Android, Kotlin, Dart, TypeScript, JavaScript 代码,并可直接运行。
使用 go-zero 的好处:
- :white_check_mark: 轻松获得支撑千万日活服务的稳定性
- :white_check_mark: 内建级联超时控制、限流、自适应熔断、自适应降载等微服务治理能力,无需配置和额外代码
- :white_check_mark: 微服务治理中间件可无缝集成到其它现有框架使用
- :white_check_mark: 极简的 API 描述,一键生成各端代码
- :white_check_mark: 自动校验客户端请求参数合法性
- :white_check_mark: 大量微服务治理和并发工具包
<img src="https://gitee.com/kevwan/static/raw/master/doc/images/architecture.png" alt="架构图" width="1500" />
## go-zero 框架背景
18 年初,我们决定从 `Java+MongoDB` 的单体架构迁移到微服务架构,经过仔细思考和对比,我们决定:
* 基于 Go 语言
* 高效的性能
* 简洁的语法
* 广泛验证的工程效率
* 极致的部署体验
* 极低的服务端资源成本
* 自研微服务框架
* 有过很多微服务框架自研经验
* 需要有更快速的问题定位能力
* 更便捷的增加新特性
## go-zero 框架设计思考
对于微服务框架的设计,我们期望保障微服务稳定性的同时,也要特别注重研发效率。所以设计之初,我们就有如下一些准则:
* 保持简单,第一原则
* 弹性设计,面向故障编程
* 工具大于约定和文档
* 高可用
* 高并发
* 易扩展
* 对业务开发友好,封装复杂度
* 约束做一件事只有一种方式
我们经历不到半年时间,彻底完成了从 `Java+MongoDB``Golang+MySQL` 为主的微服务体系迁移,并于 18 年 8 月底完全上线,稳定保障了业务后续迅速增长,确保了整个服务的高可用。
## go-zero 项目实现和特点
go-zero 是一个集成了各种工程实践的包含 web 和 rpc 框架,有如下主要特点:
* 强大的工具支持,尽可能少的代码编写
* 极简的接口
* 完全兼容 net/http
* 支持中间件,方便扩展
* 高性能
* 面向故障编程,弹性设计
* 内建服务发现、负载均衡
* 内建限流、熔断、降载,且自动触发,自动恢复
* API 参数自动校验
* 超时级联控制
* 自动缓存控制
* 链路跟踪、统计报警等
* 高并发支撑,稳定保障了疫情期间每天的流量洪峰
如下图,我们从多个层面保障了整体服务的高可用:
![弹性设计](https://gitee.com/kevwan/static/raw/master/doc/images/resilience.jpg)
觉得不错的话,别忘 **star** 👏
## Installation
在项目目录下通过如下命令安装:
```shell
GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/tal-tech/go-zero
```
## Quick Start
#### 完整示例请查看
[快速构建高并发微服务](https://github.com/tal-tech/zero-doc/blob/main/doc/shorturl.md)
[快速构建高并发微服务 - 多 RPC 版](https://github.com/tal-tech/zero-doc/blob/main/docs/zero/bookstore.md)
#### 安装 `goctl` 工具
`goctl` 读作 `go control`,不要读成 `go C-T-L`。`goctl` 的意思是不要被代码控制,而是要去控制它,其中的 `go` 不是指 `golang`。在设计 `goctl` 之初,我就希望通过`她`来解放我们的双手👈
```shell
GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/tal-tech/go-zero/tools/goctl
```
如果使用 go1.16 版本, 可以使用 `go install` 命令安装
```shell
GOPROXY=https://goproxy.cn/,direct go install github.com/tal-tech/go-zero/tools/goctl@latest
```
确保 `goctl` 可执行
#### 快速生成 api 服务
```shell
goctl api new greet
cd greet
go mod init
go mod tidy
go run greet.go -f etc/greet-api.yaml
```
默认侦听在 `8888` 端口(可以在配置文件里修改),可以通过 curl 请求:
```shell
curl -i http://localhost:8888/from/you
```
返回如下:
```http
HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 22 Oct 2020 14:03:18 GMT
Content-Length: 14
{"message":""}
```
编写业务代码:
* api 文件定义了服务对外暴露的路由
* 可以在 servicecontext.go 里面传递依赖给 logic比如 mysql, redis 等
* 在 api 定义的 get/post/put/delete 等请求对应的 logic 里增加业务处理逻辑
#### 可以根据 api 文件生成前端需要的 Java, TypeScript, Dart, JavaScript 代码
```shell
goctl api java -api greet.api -dir greet
goctl api dart -api greet.api -dir greet
...
```
## Benchmark
![benchmark](https://gitee.com/kevwan/static/raw/master/doc/images/benchmark.png)
[测试代码见这里](https://github.com/smallnest/go-web-framework-benchmark)
* awesome 系列(更多文章见『微服务实践』公众号)
* [快速构建高并发微服务](https://github.com/tal-tech/zero-doc/blob/main/doc/shorturl.md)
* [快速构建高并发微服务 - 多 RPC 版](https://github.com/tal-tech/zero-doc/blob/main/docs/zero/bookstore.md)
* 精选 `goctl` 插件
<table>
<tr>
<td>插件 </td> <td>用途 </td>
</tr>
<tr>
<td><a href="https://github.com/zeromicro/goctl-swagger">goctl-swagger</a></td> <td>一键生成 <code>api</code><code>swagger</code> 文档 </td>
</tr>
<tr>
<td><a href="https://github.com/zeromicro/goctl-android">goctl-android</a></td> <td> 生成 <code>java (android)</code><code>http client</code> 请求代码</td>
</tr>
<tr>
<td><a href="https://github.com/zeromicro/goctl-go-compact">goctl-go-compact</a> </td> <td>合并 <code>api</code> 里同一个 <code>group</code> 里的 <code>handler</code> 到一个 go 文件</td>
</tr>
</table>
## 微信公众号
`go-zero` 相关文章都会在 `微服务实践` 公众号整理呈现,欢迎扫码关注,也可以通过公众号私信我 👏
<img src="https://zeromicro.github.io/go-zero-pages/resource/go-zero-practise.png" alt="wechat" width="300" />
## 微信交流群
如果文档中未能覆盖的任何疑问,欢迎您在群里提出,我们会尽快答复。
您可以在群内提出使用中需要改进的地方,我们会考虑合理性并尽快修改。
如果您发现 ***bug*** 请及时提 ***issue***,我们会尽快确认并修改。
为了防止广告用户、识别技术同行,请 ***star*** 后加我时注明 **github** 当前 ***star*** 数,我再拉进 **go-zero** 群,感谢!
加我之前有劳点一下 ***star***,一个小小的 ***star*** 是作者们回答海量问题的动力🤝
<img src="https://raw.githubusercontent.com/tal-tech/zero-doc/main/doc/images/wechat.jpg" alt="wechat" width="300" />

---
sidebar_position: 2
---
# 加入我们
## 概要
<img src="/img/go-zero.png" alt="go-zero" width="100px" height="100px" align="right" />
[go-zero](https://github.com/zeromicro/go-zero) 是一个基于[MIT License](https://github.com/zeromicro/go-zero/blob/master/LICENSE) 的开源项目大家在使用中发现bug有新的特性等均可以参与到go-zero的贡献中来我们非常欢迎大家的积极参与也会最快响应大家提出的各种问题pr等。
## 贡献形式
* [Pull Request](https://github.com/zeromicro/go-zero/pulls)
* [Issue](https://github.com/zeromicro/go-zero/issues)
:::tip 贡献须知
go-zero 的Pull request中的代码需要满足一定规范
* 以英文注释为主
* pr时备注好功能特性描述需要清晰简洁
* 增加单元测试覆盖率达80%+
:::
## 贡献代码pr
* 进入[go-zero](https://github.com/zeromicro/go-zero) 项目fork一份[go-zero](https://github.com/zeromicro/go-zero) 项目到自己的github仓库中。
* 回到自己的github主页找到`xx/go-zero`项目其中xx为你的用户名如`anqiansong/go-zero`
![fork](/img/fork.png)
* 克隆代码到本地
![clone](/img/clone.png)
* 开发代码push到自己的github仓库
* 进入自己的github中go-zero项目点击浮层上的的`【Pull requests】`进入Compare页面。
![pr](/img/new_pr.png)
* `base repository`选择`tal-tech/go-zero` `base:master`,`head repository`选择`xx/go-zero` `compare:$branch` `$branch`为你开发的分支,如图:
![pr](/img/compare.png)
* 点击`【Create pull request】`即可实现pr申请
* 确认pr是否提交成功进入[go-zero](https://github.com/zeromicro/go-zero) 的[Pull requests](https://github.com/zeromicro/go-zero/pulls) 查看,应该有自己提交的记录,名称为你的开发时的分支名称
![pr record](/img/pr_record.png)
## Issue
在我们的社区中,有很多伙伴会积极反馈一些 go-zero 使用过程中遇到的问题。由于社区人数较多,我们虽然会实时关注社区动态,但大家的问题反馈是随机的:当团队还在解决某个伙伴提出的问题时,另外的问题又反馈上来,很容易被忽略掉。为了能够一一解决大家的问题,我们强烈建议大家通过 issue 的方式来反馈,包括但不限于 bug、期望的新功能特性等。我们在实现某个新特性时也会在 issue 中体现,大家在这里也能获取到 go-zero 的最新动向,欢迎积极参与讨论。
### 怎么提Issue
* 点击[这里](https://github.com/zeromicro/go-zero/issues) 进入go-zero的Issue页面或者直接访问[https://github.com/zeromicro/go-zero/issues](https://github.com/zeromicro/go-zero/issues) 地址
* 点击右上角的`【New issue】`新建issue
* 填写issue标题和内容
* 点击`【Submit new issue】`提交issue
## 文档贡献
文档仓库[`go-zero.dev`](https://github.com/zeromicro/go-zero.dev),使用[docusaurus](https://docusaurus.io)构建文档变更合并主分支后会自动触发Github Actions进行自动部署。
### 新增/修改文档
首先fork文档仓库并clone自己的仓库到本地然后在docs目录中对应的子目录下新增修改文档文档格式为Markdown并支持一些扩展语法具体支持的语法请参考[Docusaurus: Markdown Features](https://docusaurus.io/docs/markdown-features)
### 提交pr
新增/修改完文档后即可提交pr等待团队合并文档。
## 参考文档
* [Github Pull request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/proposing-changes-to-your-work-with-pull-requests)

{
"label": "其他组件",
"position": 4
}

---
sidebar_position: 1
---
# go-queue
延迟队列:一种带有 延迟功能 的消息队列
- 延时 → 未来一个不确定的时间
- mq → 消费行为具有顺序性
这样解释,整个设计就清楚了。你的目的是 延时,承载容器是 mq。
### 背景
列举一下我日常业务中可能存在的场景:
- 建立延时日程,需要提醒老师上课
- 延时推送 → 推送老师需要的公告以及作业
为了解决以上问题,最简单直接的办法就是定时去扫表:
:::info
服务启动时,开启一个异步协程 → 定时扫描 msg table到了事件触发事件调用对应的 handler
:::
几个缺点:
- 每一个需要定时/延时任务的服务,都需要一个 msg table 做额外存储 → 存储与业务耦合
- 定时扫描 → 时间不好控制,可能会错过触发时间
- 对存放 msg table 的数据库实例是一个负担:始终有一个服务在不断地对数据库产生持续的压力
最大问题其实是什么?
调度模型基本统一,不要做重复的业务逻辑
我们可以考虑将逻辑从具体的业务逻辑里面抽出来,变成一个公共的部分。
而这个调度模型,就是 延时队列 。
其实说白了:
延时队列模型,就是将未来执行的事件提前存储好,然后不断扫描这个存储,触发执行时间则执行对应的任务逻辑。
那么开源界是否已有现成的方案呢?答案是肯定的,Beanstalkd(https://github.com/beanstalkd/beanstalkd)基本上已经满足以上需求。
### 设计目的
- 消费行为 at least once
- 高可用
- 实时性
- 支持消息删除
依次说说上述这些目的的设计方向:
#### 消费行为
这个概念取自 mq 。mq 中提供了消费投递的几个方向:
- at most once → 至多一次,消息可能会丢,但不会重复
- at least once → 至少一次,消息肯定不会丢失,但可能重复
- exactly once → 有且只有一次,消息不丢失不重复,且只消费一次。
exactly once 需要 producer + consumer 两端共同保证。当 producer 没办法保证时,consumer 需要在消费前做一次去重,达到消费过一次不会重复消费的效果,这在延迟队列内部直接保证。
最简单:使用 redis 的 setNX 达到 job id 的唯一消费
#### 高可用
支持多实例部署。挂掉一个实例后,还有后备实例继续提供服务。
这个对外提供的 API 使用 cluster 模型,内部将多个 node 封装起来,多个 node 之间冗余存储。
#### 为什么不使用 Kafka
考虑过类似基于 kafka/rocketmq 等消息队列作为存储的方案,最后从存储设计模型放弃了这类选择。
举个例子,假设以 Kafka 这种消息队列存储来实现延时功能,每个延时时长都需要创建一个单独的 topic(如: Q1-1s, Q1-2s..)。这种设计在延时时间比较固定的场景下问题不太大,但如果延时时间变化比较大,会导致 topic 数目过多,把磁盘从顺序读写变成随机读写,从而导致性能衰减,同时也会带来重启或者恢复时间过长等问题。
- topic 过多 → 存储压力
- topic 存储的是现实时间,在调度时对不同时间 (topic) 的读取,顺序读 → 随机读
- 同理,写入的时候顺序写 → 随机写
### 架构设计
![dq](/img/dq.png)
### API 设计
producer
- producer.At(msg []byte, at time.Time)
- producer.Delay(body []byte, delay time.Duration)
- producer.Revoke(ids string)
consumer
- consumer.Consume(consume handler)
使用延时队列后,服务整体结构如下,以及队列中 job 的状态变迁:
![delay queue](/img/delay-queue.png)
- service → producer.At(msg []byte, at time.Time) → 插入延时job到 tube 中
- 定时触发 → job 状态更新为 ready
- consumer 获取到 ready job → 取出 job开始消费并更改状态为 reserved
- 执行传入 consumer 中的 handler 逻辑处理函数
### 生产实践
主要介绍一下在日常开发,我们使用到延时队列的哪些具体功能。
#### 生产端
- 开发中生产延时任务,只需确定任务执行时间
- 传入 At() producer.At(msg []byte, at time.Time)
- 内部会自行计算时间差值,插入 tube
- 如果出现任务时间的修改,以及任务内容的修改
- 在生产时可能需要额外建立一个 logic_id → job_id 的关系表
- 查询到 job_id → producer.Revoke(ids string) ,对其删除,然后重新插入
#### 消费端
首先,框架层面保证了消费行为的 exactly once,但上层业务逻辑可能因为消费失败、网络等各种问题导致消费异常,兜底重试交给业务开发来做。这样做的原因:
- 框架以及基础组件只保证 job 状态的流转正确性
- 框架消费端只保证消费行为的统一
- 延时任务在不同业务中行为不统一
- 强调任务的必达性,则消费失败时需要不断重试直到任务成功
- 强调任务的准时性,则消费失败时,对业务不敏感则可以选择丢弃
这里描述一下框架消费端是怎么保证消费行为的统一:
消费端分为 cluster 和 node 两层。
#### cluster
`https://github.com/tal-tech/go-queue/blob/master/dq/consumer.go#L45`
- cluster 内部将 consume handler 做了一层再封装
- 对 consume body 做hash并使用此 hash 作为 redis 去重的key
- 如果存在,则不做处理,丢弃
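文中的 hash 去重逻辑,核心等价于 redis setNX 的“第一次写入成功、重复写入失败”语义。下面用内存 map 示意(仅为演示,类型名为假设;真实实现使用 redis,从而在多实例间共享去重状态):

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sync"
)

// dedup 模拟 redis setNX:同一个 key 只有第一次 SetNX 返回 true
type dedup struct {
	mu   sync.Mutex
	seen map[string]struct{}
}

func newDedup() *dedup {
	return &dedup{seen: make(map[string]struct{})}
}

// SetNX 返回该 key 是否是第一次出现
func (d *dedup) SetNX(key string) bool {
	d.mu.Lock()
	defer d.mu.Unlock()
	if _, ok := d.seen[key]; ok {
		return false
	}
	d.seen[key] = struct{}{}
	return true
}

// wrapConsume 模拟 cluster 对 consume handler 的再封装:
// 对 body 做 hash 作为去重 key,重复的 job 直接丢弃
func wrapConsume(d *dedup, handler func(body []byte)) func(body []byte) {
	return func(body []byte) {
		key := fmt.Sprintf("%x", sha256.Sum256(body))
		if !d.SetNX(key) {
			return // 已消费过,不做处理,丢弃
		}
		handler(body)
	}
}

func main() {
	d := newDedup()
	consumed := 0
	consume := wrapConsume(d, func(body []byte) { consumed++ })

	consume([]byte("job-1"))
	consume([]byte("job-1")) // 重复投递,被去重
	consume([]byte("job-2"))
	fmt.Println(consumed) // 2
}
```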
#### node
`https://github.com/tal-tech/go-queue/blob/master/dq/consumernode.go#L36`
- 消费 node 获取到 ready job,先执行 Reserve(TTR) 预订此 job,表示即将执行该 job 的逻辑处理
- 在 node 中 delete(job);然后再进行消费
- 如果失败,则上抛给业务层,做相应的兜底重试
所以对于消费端,开发者需要自己实现消费的幂等性。
![idempotent](/img/idempotent.png)
### 使用示例
[使用示例](https://github.com/zeromicro/go-queue/tree/master/example)

---
sidebar_position: 2
---
# mapReduce
### 为什么需要 MapReduce
在实际的业务场景中我们常常需要从不同的 rpc 服务中获取相应属性来组装成复杂对象。
比如要查询商品详情:
- 商品服务-查询商品属性
- 库存服务-查询库存属性
- 价格服务-查询价格属性
- 营销服务-查询营销属性
如果是串行调用的话响应时间会随着 rpc 调用次数呈线性增长,所以我们要优化性能一般会将串行改并行。
简单的场景下使用 WaitGroup 也能够满足需求,但是如果我们需要对 rpc 调用返回的数据进行校验、数据加工转换、数据汇总呢?继续使用 WaitGroup 就有点力不从心了。Go 的官方库中并没有这种工具(java 中提供了 CompletableFuture),于是 go-zero 作者依据 MapReduce 架构思想实现了进程内的数据批处理并发工具类 mapReduce。
### 设计思路
我们尝试把自己代入到作者的角色梳理一下并发工具可能的业务场景:
- 查询商品详情:支持并发调用多个服务来组合产品属性,支持调用错误可以立即结束。
- 商品详情页自动推荐用户卡券:支持并发校验卡券,校验失败自动剔除,返回全部卡券。
以上实际都是在进行对输入数据进行处理最后输出清洗后的数据,针对数据处理有个非常经典的异步模式:生产者消费者模式。于是我们可以抽象一下数据批处理的生命周期,大致可以分为三个阶段:
![three stage](/img/three-stage.png)
- 数据生产 generate
- 数据加工 mapper
- 数据聚合 reducer
其中数据生产是不可或缺的阶段,数据加工、数据聚合是可选阶段,数据生产与加工支持并发调用,数据聚合基本属于纯内存操作单协程即可。
再来思考一下不同阶段之间数据应该如何流转,既然不同阶段的数据处理都是由不同 goroutine 执行的,那么很自然的可以考虑采用 channel 来实现 goroutine 之间的通信。
![flow](/img/flow.png)
如何实现随时终止流程呢?
很简单goroutine 中监听一个全局的结束 channel 就行。
### go-zero 代码实现
`core/mr/mapreduce.go`
详细源码可查看 https://github.com/Ouyangan/go-zero-annotation/blob/24a5753f19a6a18fc05615cb019ad809aab54232/core/mr/mapreduce.go
### 前置知识 - channel 基本用法
因为 MapReduce 源码中大量使用 channel 进行通信,大概提一下 channel 基本用法:
channel 写结束后记得关闭
```go
ch := make(chan interface{})
// 写入完毕需要主动关闭channel
defer func() {
close(ch)
}()
go func() {
// v,ok模式 读取channel
for {
v, ok := <-ch
if !ok {
return
}
t.Log(v)
}
// for range模式读取channelchannel关闭循环自动退出
for i := range ch {
t.Log(i)
}
// 清空channelchannel关闭循环自动退出
for range ch {
}
}()
for i := 0; i < 10; i++ {
ch <- i
time.Sleep(time.Second)
}
```
已关闭的 channel 依然支持读取
限定 channel 读写权限
```go
// 只读channel
func readChan(rch <-chan interface{}) {
for i := range rch {
log.Println(i)
}
}
// 只写channel
func writeChan(wch chan<- interface{}) {
wch <- 1
}
```
### 接口定义
先来看最核心的三个函数定义:
- 数据生产
- 数据加工
- 数据聚合
```go
// 数据生产func
// source - 数据被生产后写入source
GenerateFunc func(source chan<- interface{})
// 数据加工func
// item - 生产出来的数据
// writer - 调用writer.Write()可以将加工后的向后传递至reducer
// cancel - 终止流程func
MapperFunc func(item interface{}, writer Writer, cancel func(error))
// 数据聚合func
// pipe - 加工出来的数据
// writer - 调用writer.Write()可以将聚合后的数据返回给用户
// cancel - 终止流程func
ReducerFunc func(pipe <-chan interface{}, writer Writer, cancel func(error))
```
### 面向用户的方法定义
使用方法可以查看官方文档,这里不做赘述
面向用户的方法比较多,方法主要分为两大类:
- 无返回
- 执行过程发生错误立即终止
- 执行过程不关注错误
- 有返回值
- 手动写入 source手动读取聚合数据 channel
- 手动写入 source自动读取聚合数据 channel
- 外部传入 source自动读取聚合数据 channel
```go
// 并发执行func发生任何错误将会立即终止流程
func Finish(fns ...func() error) error
// 并发执行func即使发生错误也不会终止流程
func FinishVoid(fns ...func())
// 需要用户手动将生产数据写入 source加工数据后返回一个channel供读取
// opts - 可选参数,目前包含:数据加工阶段协程数量
func Map(generate GenerateFunc, mapper MapFunc, opts ...Option)
// 无返回值,不关注错误
func MapVoid(generate GenerateFunc, mapper VoidMapFunc, opts ...Option)
// 无返回值,关注错误
func MapReduceVoid(generate GenerateFunc, mapper MapperFunc, reducer VoidReducerFunc, opts ...Option)
// 需要用户手动将生产数据写入 source ,并返回聚合后的数据
// generate 生产
// mapper 加工
// reducer 聚合
// opts - 可选参数,目前包含:数据加工阶段协程数量
func MapReduce(generate GenerateFunc, mapper MapperFunc, reducer ReducerFunc, opts ...Option) (interface{}, error)
// 支持传入数据源channel并返回聚合后的数据
// source - 数据源channel
// mapper - 读取source内容并处理
// reducer - 数据处理完毕发送至reducer聚合
func MapReduceWithSource(source <-chan interface{}, mapper MapperFunc, reducer ReducerFunc,
opts ...Option) (interface{}, error)
```
核心方法是 MapReduceWithSource 和 Map其他方法都在内部调用她俩。弄清楚了 MapReduceWithSource 方法 Map 也不在话下。
### MapReduceWithSource 源码实现
一切都在这张图里面了
![mapreduce](/img/mapreduce.png)
```go
// 支持传入数据源channel并返回聚合后的数据
// source - 数据源channel
// mapper - 读取source内容并处理
// reducer - 数据处理完毕发送至reducer聚合
func MapReduceWithSource(source <-chan interface{}, mapper MapperFunc, reducer ReducerFunc,
opts ...Option) (interface{}, error) {
// 可选参数设置
options := buildOptions(opts...)
// 聚合数据channel需要手动调用write方法写入到output中
output := make(chan interface{})
// output最后只会被读取一次
defer func() {
// 如果有多次写入的话则会造成阻塞从而导致协程泄漏
// 这里用 for range检测是否可以读出数据读出数据说明多次写入了
// 为什么这里使用panic呢显示的提醒用户用法错了会比自动修复掉好一些
for range output {
panic("more than one element written in reducer")
}
}()
// 创建有缓冲的chan容量为workers
// 意味着最多允许 workers 个协程同时处理数据
collector := make(chan interface{}, options.workers)
// 数据聚合任务完成标志
done := syncx.NewDoneChan()
// 支持阻塞写入chan的writer
writer := newGuardedWriter(output, done.Done())
// 单例关闭
var closeOnce sync.Once
var retErr errorx.AtomicError
// 数据聚合任务已结束,发送完成标志
finish := func() {
// 只能关闭一次
closeOnce.Do(func() {
// 发送聚合任务完成信号close函数将会向chan写入一个零值
done.Close()
// 关闭数据聚合chan
close(output)
})
}
// 取消操作
cancel := once(func(err error) {
// 设置error
if err != nil {
retErr.Set(err)
} else {
retErr.Set(ErrCancelWithNil)
}
// 清空source channel
drain(source)
// 调用完成方法
finish()
})
go func() {
defer func() {
// 清空聚合任务channel
drain(collector)
// 捕获panic
if r := recover(); r != nil {
// 调用cancel方法立即结束
cancel(fmt.Errorf("%v", r))
} else {
// 正常结束
finish()
}
}()
// 执行数据加工
// 注意writer.write将加工后数据写入了output
reducer(collector, writer, cancel)
}()
// 异步执行数据加工
// source - 数据生产
// collector - 数据收集
// done - 结束标志
// workers - 并发数
go executeMappers(func(item interface{}, w Writer) {
mapper(item, w, cancel)
}, source, collector, done.Done(), options.workers)
// reducer将加工后的数据写入了output
// 需要数据返回时读取output即可
// 假如output被写入了超过两次
// 则开始的defer func那里将还可以读到数据
// 由此可以检测到用户调用了多次write方法
value, ok := <-output
if err := retErr.Load(); err != nil {
return nil, err
} else if ok {
return value, nil
} else {
return nil, ErrReduceNoOutput
}
}
```
```go
// 数据加工
func executeMappers(mapper MapFunc, input <-chan interface{}, collector chan<- interface{},
done <-chan lang.PlaceholderType, workers int) {
// goroutine协调同步信号量
var wg sync.WaitGroup
defer func() {
// 等待数据加工任务完成
// 防止数据加工的协程还未处理完数据就直接退出了
wg.Wait()
// 关闭数据加工channel
close(collector)
}()
// 带缓冲区的channel缓冲区大小为workers
// 控制数据加工的协程数量
pool := make(chan lang.PlaceholderType, workers)
// 数据加工writer
writer := newGuardedWriter(collector, done)
for {
select {
// 监听到外部结束信号,直接结束
case <-done:
return
// 控制数据加工协程数量
// 缓冲区容量-1
// 无容量时将会被阻塞,等待释放容量
case pool <- lang.Placeholder:
// 阻塞等待生产数据channel
item, ok := <-input
// 如果ok为false则说明input已被关闭或者清空
// 数据加工完成,执行退出
if !ok {
// 缓冲区容量+1
<-pool
// 结束本次循环
return
}
// wg同步信号量+1
wg.Add(1)
// better to safely run caller defined method
// 异步执行数据加工防止panic错误
threading.GoSafe(func() {
defer func() {
// wg同步信号量-1
wg.Done()
// 缓冲区容量+1
<-pool
}()
mapper(item, writer)
})
}
}
}
```
### 使用示例
```go
package main
import (
"log"
"time"
"github.com/tal-tech/go-zero/core/mr"
"github.com/tal-tech/go-zero/core/timex"
)
type user struct{}
func (u *user) User(uid int64) (interface{}, error) {
time.Sleep(time.Millisecond * 30)
return nil, nil
}
type store struct{}
func (s *store) Store(pid int64) (interface{}, error) {
time.Sleep(time.Millisecond * 50)
return nil, nil
}
type order struct{}
func (o *order) Order(pid int64) (interface{}, error) {
time.Sleep(time.Millisecond * 40)
return nil, nil
}
var (
userRpc user
storeRpc store
orderRpc order
)
func main() {
start := timex.Now()
_, err := productDetail(123, 345)
if err != nil {
log.Printf("product detail error: %v", err)
return
}
log.Printf("productDetail time: %v", timex.Since(start))
// the data processing
res, err := checkLegal([]int64{1, 2, 3})
if err != nil {
log.Printf("check error: %v", err)
return
}
log.Printf("check res: %v", res)
}
type ProductDetail struct {
User interface{}
Store interface{}
Order interface{}
}
func productDetail(uid, pid int64) (*ProductDetail, error) {
var pd ProductDetail
err := mr.Finish(func() (err error) {
pd.User, err = userRpc.User(uid)
return
}, func() (err error) {
pd.Store, err = storeRpc.Store(pid)
return
}, func() (err error) {
pd.Order, err = orderRpc.Order(pid)
return
})
if err != nil {
return nil, err
}
return &pd, nil
}
func checkLegal(uids []int64) ([]int64, error) {
r, err := mr.MapReduce(func(source chan<- interface{}) {
for _, uid := range uids {
source <- uid
}
}, func(item interface{}, writer mr.Writer, cancel func(error)) {
	uid := item.(int64)
	ok, err := check(uid)
	if err != nil {
		cancel(err)
		// stop processing this item once the whole job is cancelled
		return
	}
	if ok {
		writer.Write(uid)
	}
}, func(pipe <-chan interface{}, writer mr.Writer, cancel func(error)) {
var uids []int64
for p := range pipe {
uids = append(uids, p.(int64))
}
writer.Write(uids)
})
if err != nil {
return nil, err
}
return r.([]int64), nil
}
func check(uid int64) (bool, error) {
// do something check user legal
time.Sleep(time.Millisecond * 20)
return true, nil
}
```
[More examples](https://github.com/zeromicro/zero-examples/tree/main/mapreduce)
{
"label": "问题汇总",
"position": 5
}
---
sidebar_position: 2
---
# Community questions
---
sidebar_position: 1
---
# FAQ
### Error on Windows
```text
A required privilege is not held by the client.
```
Solution: run goctl "as administrator".
### Errors caused by gRPC
* Error 1
```text
protoc-gen-go: unable to determine Go import path for "greet.proto"
Please specify either:
• a "go_package" option in the .proto source file, or
• a "M" argument on the command line.
See https://developers.google.com/protocol-buffers/docs/reference/go-generated#package for more information.
--go_out: protoc-gen-go: Plugin failed with status code 1.
```
Solution:
```text
go get -u github.com/golang/protobuf/protoc-gen-go@v1.3.2
```
### protoc-gen-go fails to install
```text
go get github.com/golang/protobuf/protoc-gen-go: module github.com/golang/protobuf/protoc-gen-go: Get "https://proxy.golang.org/github.com/golang/protobuf/protoc-gen-go/@v/list": dial tcp 216.58.200.49:443: i/o timeout
```
Make sure `GOPROXY` has been set.
### API service fails to start
```text
error: config file etc/user-api.yaml, error: type mismatch for field xx
```
Check that the configuration items in the `user-api.yaml` file have been set; if values are present, verify that the file conforms to the yaml format.
### goctl not found
```
command not found: goctl
```
Make sure goctl has been installed and added to the PATH environment variable.
### goctl is installed but `command not found: goctl` is still reported
If you installed it via `go get`, `goctl` should be under `$GOPATH`; run `go env GOPATH` to see the full path. Whether `goctl` is in `$GOPATH` or some other directory, the error above means the directory containing `goctl` is not in your `PATH` environment variable.
### The proto file uses import; how should the goctl command be written?
For imported protos, `goctl` maps `protoc`'s `--proto_path, -I` flag to specify the `BasePath`; `goctl` passes this flag value through to `protoc`.
### `base.proto` is imported by the main proto; why is `base.pb.go` not generated?
Files like `base.proto` usually exist because developers want to reuse messages. They may be written by the developers themselves, or come from the basic protos provided in `google.golang.org/grpc`, such as `google/protobuf/any.proto`. If `goctl` generated code for them, the point of managing these protos centrally would be lost.
### How do I control the cache duration in a model?
Pass the optional `cache.WithExpiry` argument to `sqlc.NewNodeConn`. For example, to set the cache duration to one day:
```go
sqlc.NewNodeConn(conn, redis, cache.WithExpiry(24*time.Hour))
```
### How do I disable the statistics logs (stat)?
```go
logx.DisableStat()
```
### Direct-connection vs. service-discovery rpc client styles
```go
// mode 1: direct connection to a cluster
// conf := zrpc.NewDirectClientConf([]string{"ip:port"}, "app", "token")
// mode 2: etcd service discovery
// conf := zrpc.NewEtcdClientConf([]string{"ip:port"}, "key", "app", "token")
// client, _ := zrpc.NewClient(conf)
// mode 3: direct connection to a single ip
// client, _ := zrpc.NewClientWithTarget("127.0.0.1:8888")
```
{
"label": "快速开始",
"position": 1
}
---
sidebar_position: 3
---
# Build an API Service
### Create the greet service
```shell
$ cd ~/go-zero-demo
$ go mod init go-zero-demo
$ goctl api new greet
Done.
```
Take a look at the structure of the `greet` service:
```shell
$ cd greet
$ tree
```
```text
.
├── etc
│   └── greet-api.yaml
├── greet.api
├── greet.go
└── internal
├── config
│   └── config.go
├── handler
│   ├── greethandler.go
│   └── routes.go
├── logic
│   └── greetlogic.go
├── svc
│   └── servicecontext.go
└── types
└── types.go
```
As the directory tree above shows, the `greet` service is small but complete. Next we can write business code in `greetlogic.go`.
### Write the logic
```go title="$ vim ~/go-zero-demo/greet/internal/logic/greetlogic.go"
func (l *GreetLogic) Greet(req types.Request) (*types.Response, error) {
return &types.Response{
Message: "Hello go-zero",
}, nil
}
```
### Start and access the service
* Start the service
```shell
$ cd ~/go-zero-demo/greet
$ go run greet.go -f etc/greet-api.yaml
```
```text
Starting server at 0.0.0.0:8888...
```
* Access the service
```shell
$ curl -i -X GET \
http://localhost:8888/from/you
```
```text
HTTP/1.1 200 OK
Content-Type: application/json
Date: Sun, 07 Feb 2021 04:31:25 GMT
Content-Length: 27
{"message":"Hello go-zero"}
```
### Source code
[greet source code](https://github.com/zeromicro/go-zero-demo/tree/master/greet)
---
sidebar_position: 4
---
# Build an RPC Service
### Create the user rpc service
* Create the user rpc service
```shell
$ cd ~/go-zero-demo/mall
$ mkdir -p user/rpc && cd user/rpc
```
* Add a `user.proto` file with a `getUser` method
```protobuf title="$ vim ~/go-zero-demo/mall/user/rpc/user.proto"
syntax = "proto3";
package user;
// for protoc-gen-go versions later than 1.4.0, the proto file must declare go_package, otherwise code cannot be generated
option go_package = "./user";
message IdRequest {
string id = 1;
}
message UserResponse {
  // user id
  string id = 1;
  // user name
  string name = 2;
  // user gender
  string gender = 3;
}
service User {
rpc getUser(IdRequest) returns(UserResponse);
}
```
* Generate the code
```shell
$ cd ~/go-zero-demo/mall/user/rpc
$ goctl rpc template -o user.proto
$ goctl rpc proto -src user.proto -dir .
[goctl version <= 1.2.1] protoc -I=/Users/xx/mall/user user.proto --goctl_out=plugins=grpc:/Users/xx/mall/user/user
[goctl version > 1.2.1] protoc -I=/Users/xx/mall/user user.proto --go_out=plugins=grpc:/Users/xx/mall/user/user
Done.
```
:::tip protoc-gen-go version
If the installed `protoc-gen-go` version is later than 1.4.0, the proto file should declare `go_package`.
:::
* Fill in the business logic
```shell
$ vim internal/logic/getuserlogic.go
```
```go
package logic
import (
"context"
"go-zero-demo/mall/user/rpc/internal/svc"
"go-zero-demo/mall/user/rpc/user"
"github.com/tal-tech/go-zero/core/logx"
)
type GetUserLogic struct {
ctx context.Context
svcCtx *svc.ServiceContext
logx.Logger
}
func NewGetUserLogic(ctx context.Context, svcCtx *svc.ServiceContext) *GetUserLogic {
return &GetUserLogic{
ctx: ctx,
svcCtx: svcCtx,
Logger: logx.WithContext(ctx),
}
}
func (l *GetUserLogic) GetUser(in *user.IdRequest) (*user.UserResponse, error) {
return &user.UserResponse{
Id: "1",
Name: "test",
}, nil
}
```
* Modify the configuration
```shell
$ vim internal/config/config.go
```
```go
package config
import (
"github.com/tal-tech/go-zero/zrpc"
)
type Config struct {
zrpc.RpcServerConf
}
```
* Add the yaml configuration
```shell
$ vim etc/user.yaml
```
```yaml
Name: user.rpc
ListenOn: 127.0.0.1:8080
Etcd:
Hosts:
- 127.0.0.1:2379
Key: user.rpc
```
* Adjust the directory layout
```shell
$ cd ~/go-zero-demo/mall/user/rpc
$ mkdir userclient && mv ./user/user.go ./userclient
```
### Start the service and verify
:::tip etcd installation
[See the etcd installation guide](https://etcd.io/docs/v3.5/install/)
:::
* Start etcd
```shell
$ etcd
```
* Start the user rpc service
```shell
$ go run user.go -f etc/user.yaml
```
```text
Starting rpc server at 127.0.0.1:8080...
```
---
sidebar_position: 2
---
# Build Tool
`goctl` is pronounced `go control`, not `go C-T-L`. `goctl` means: don't be controlled by code, control it instead. The `go` in the name does not refer to `golang`. When I designed `goctl`, I hoped it would free our hands 👈
### [See goctl for details](../build-tool/tool-intro.md)
---
sidebar_position: 1
---
# Concepts
### go-zero
A web and rpc framework that integrates various engineering practices.
### goctl
A helper tool aimed at improving engineering efficiency and reducing error rates for developers.
### goctl plugins
Peripheral binaries centered on goctl that satisfy custom code-generation needs, such as the route-merging plugin `goctl-go-compact`, the `goctl-swagger` plugin for generating swagger documentation, and the `goctl-php` plugin for generating a php client.
### intellij/vscode plugins
Plugins for the intellij product family that work with goctl, replacing goctl command-line operations with a UI.
### api file
An api file is a text file that defines and describes an api service. It ends with the .api suffix and contains api syntax content.
### goctl environment
The goctl environment is the preparation needed before using goctl, including:
* a golang environment
* protoc
* the protoc-gen-go plugin
* go module | gopath
### go-zero-demo
go-zero-demo is one large repository containing all the source code in this documentation; the demo sub-projects we write later are all created under it.
Therefore we need to create the `go-zero-demo` repository in advance; here I put it under the home directory.
```shell
$ cd ~
$ mkdir go-zero-demo && cd go-zero-demo
$ go mod init go-zero-demo
```
// @ts-check
// Note: type annotations allow type checking and IDEs autocompletion
const lightCodeTheme = require('prism-react-renderer/themes/github');
const darkCodeTheme = require('prism-react-renderer/themes/dracula');
/** @type {import('@docusaurus/types').Config} */
const config = {
title: 'go-zero',
tagline: 'go-zero是一个集成了各种工程实践的web和rpc框架。通过弹性设计保障了大并发服务端的稳定性经受了充分的实战检验',
url: 'https://zeromicro.github.io',
baseUrl: '/zero-doc/',
onBrokenLinks: 'throw',
onBrokenMarkdownLinks: 'warn',
favicon: 'img/go-zero.svg',
organizationName: 'zeromicro', // Usually your GitHub org/user name.
projectName: 'zero-doc', // Usually your repo name.
presets: [
[
'classic',
/** @type {import('@docusaurus/preset-classic').Options} */
({
docs: {
sidebarPath: require.resolve('./sidebars.js'),
editUrl: undefined,
},
blog: {
showReadingTime: true,
// Please change this to your repo.
editUrl:
'https://github.com/facebook/docusaurus/tree/main/packages/create-docusaurus/templates/shared/',
},
theme: {
customCss: require.resolve('./src/css/custom.css'),
},
}),
],
],
themeConfig:
/** @type {import('@docusaurus/preset-classic').ThemeConfig} */
({
algolia: {
apiKey: '0d47915493f6871d9cef0dc511f7e64e',
indexName: 'go-zero',
},
navbar: {
title: 'Go-zero',
logo: {
alt: 'Go-zero Logo',
src: 'img/go-zero.png',
},
items: [
{
to: 'docs/intro/brief',
activeBasePath: 'docs',
position: 'left',
label: '文档',
},
{to: '/blog', label: '博客', position: 'left'},
{
type: 'localeDropdown',
position: 'right',
},
{
href: 'https://github.com/zeromicro/go-zero',
label: 'GitHub',
position: 'right',
},
],
},
footer: {
style: 'dark',
links: [
{
title: 'Docs',
items: [
{
label: 'docs',
to: '/docs/intro/brief',
},
],
},
{
title: 'Community',
items: [
{
label: 'Chat Group',
href: 'https://join.slack.com/t/go-zero/shared_invite/zt-10ruju779-BE4y6lQNB_R21samtyKTgA',
},
],
},
{
title: 'More',
items: [
{
label: 'Blog',
to: '/blog',
},
{
label: 'GitHub',
href: 'https://github.com/zeromicro/go-zero',
},
],
},
],
copyright: `Copyright © ${new Date().getFullYear()} go-zero.dev, Inc. Built with Docusaurus.`,
},
prism: {
theme: lightCodeTheme,
darkTheme: darkCodeTheme,
},
}),
i18n: {
defaultLocale: 'zh',
locales: ['zh', 'en'],
localeConfigs: {
zh: {
label: '中文',
},
en: {
label: 'English',
},
},
},
};
module.exports = config;
website/i18n/en/code.json Normal file
{
"开始体验吧": {
"message": "Get Started"
},
"稳定性": {
"message": "Stability"
},
"轻松获得支撑千万日活服务的稳定性": {
"message": "Easily gain the stability to support services with tens of millions of daily active users"
},
"服务治理": {
"message": "Service Governance"
},
"内建级联超时控制、限流、自适应熔断、自适应降载等微服务治理能力,无需配置和额外代码": {
"message": "Built-in cascade timeout control, rate limiting, adaptive circuit breaking, adaptive load shedding, and other microservice governance capabilities, with no configuration and no additional code"
},
"可插拔": {
"message": "Pluggable"
},
"微服务治理中间件可无缝集成到其它现有框架使用": {
"message": "Microservice governance middleware can be seamlessly integrated with other existing frameworks"
},
"代码自动生成": {
"message": "Automatic code generation"
},
"极简的 API 描述,一键生成各端代码": {
"message": "Minimal API description, one-click code generation for each end"
},
"效验请求合法性": {
"message": "Validation of request legality"
},
"自动校验客户端请求参数合法性": {
"message": "Automatic verification of the legitimacy of client request parameters"
},
"工具包": {
"message": "Toolkit"
},
"大量微服务治理和并发工具包": {
"message": "Extensive microservice governance and concurrency toolkit"
},
"go-zero是一个集成了各种工程实践的web和rpc框架。通过弹性设计保障了大并发服务端的稳定性经受了充分的实战检验": {
"message": "go-zero is a web and rpc framework with lots of builtin engineering practices. It's born to ensure the stability of busy services with resilience design, and has been serving sites with tens of millions of users for years"
},
"theme.ErrorPageContent.title": {
"message": "This page crashed.",
"description": "The title of the fallback page when the page crashed"
},
"theme.ErrorPageContent.tryAgain": {
"message": "Try again",
"description": "The label of the button to try again when the page crashed"
},
"theme.NotFound.title": {
"message": "Page Not Found",
"description": "The title of the 404 page"
},
"theme.NotFound.p1": {
"message": "We could not find what you were looking for.",
"description": "The first paragraph of the 404 page"
},
"theme.NotFound.p2": {
"message": "Please contact the owner of the site that linked you to the original URL and let them know their link is broken.",
"description": "The 2nd paragraph of the 404 page"
},
"theme.BackToTopButton.buttonAriaLabel": {
"message": "Scroll back to top",
"description": "The ARIA label for the back to top button"
},
"theme.AnnouncementBar.closeButtonAriaLabel": {
"message": "Close",
"description": "The ARIA label for close button of announcement bar"
},
"theme.blog.archive.title": {
"message": "Archive",
"description": "The page & hero title of the blog archive page"
},
"theme.blog.archive.description": {
"message": "Archive",
"description": "The page & hero description of the blog archive page"
},
"theme.blog.paginator.navAriaLabel": {
"message": "Blog list page navigation",
"description": "The ARIA label for the blog pagination"
},
"theme.blog.paginator.newerEntries": {
"message": "Newer Entries",
"description": "The label used to navigate to the newer blog posts page (previous page)"
},
"theme.blog.paginator.olderEntries": {
"message": "Older Entries",
"description": "The label used to navigate to the older blog posts page (next page)"
},
"theme.blog.post.readingTime.plurals": {
"message": "One min read|{readingTime} min read",
"description": "Pluralized label for \"{readingTime} min read\". Use as much plural forms (separated by \"|\") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)"
},
"theme.blog.post.readMore": {
"message": "Read More",
"description": "The label used in blog post item excerpts to link to full blog posts"
},
"theme.blog.post.paginator.navAriaLabel": {
"message": "Blog post page navigation",
"description": "The ARIA label for the blog posts pagination"
},
"theme.blog.post.paginator.newerPost": {
"message": "Newer Post",
"description": "The blog post button label to navigate to the newer/previous post"
},
"theme.blog.post.paginator.olderPost": {
"message": "Older Post",
"description": "The blog post button label to navigate to the older/next post"
},
"theme.blog.sidebar.navAriaLabel": {
"message": "Blog recent posts navigation",
"description": "The ARIA label for recent posts in the blog sidebar"
},
"theme.blog.post.plurals": {
"message": "One post|{count} posts",
"description": "Pluralized label for \"{count} posts\". Use as much plural forms (separated by \"|\") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)"
},
"theme.blog.tagTitle": {
"message": "{nPosts} tagged with \"{tagName}\"",
"description": "The title of the page for a blog tag"
},
"theme.tags.tagsPageLink": {
"message": "View All Tags",
"description": "The label of the link targeting the tag list page"
},
"theme.CodeBlock.copyButtonAriaLabel": {
"message": "Copy code to clipboard",
"description": "The ARIA label for copy code blocks button"
},
"theme.CodeBlock.copied": {
"message": "Copied",
"description": "The copied button label on code blocks"
},
"theme.CodeBlock.copy": {
"message": "Copy",
"description": "The copy button label on code blocks"
},
"theme.docs.sidebar.expandButtonTitle": {
"message": "Expand sidebar",
"description": "The ARIA label and title attribute for expand button of doc sidebar"
},
"theme.docs.sidebar.expandButtonAriaLabel": {
"message": "Expand sidebar",
"description": "The ARIA label and title attribute for expand button of doc sidebar"
},
"theme.docs.paginator.navAriaLabel": {
"message": "Docs pages navigation",
"description": "The ARIA label for the docs pagination"
},
"theme.docs.paginator.next": {
"message": "Next",
"description": "The label used to navigate to the next doc"
},
"theme.docs.paginator.previous": {
"message": "Previous",
"description": "The label used to navigate to the previous doc"
},
"theme.docs.sidebar.collapseButtonTitle": {
"message": "Collapse sidebar",
"description": "The title attribute for collapse button of doc sidebar"
},
"theme.docs.sidebar.collapseButtonAriaLabel": {
"message": "Collapse sidebar",
"description": "The title attribute for collapse button of doc sidebar"
},
"theme.DocSidebarItem.toggleCollapsedCategoryAriaLabel": {
"message": "Toggle the collapsible sidebar category '{label}'",
"description": "The ARIA label to toggle the collapsible sidebar category"
},
"theme.docs.tagDocListPageTitle.nDocsTagged": {
"message": "One doc tagged|{count} docs tagged",
"description": "Pluralized label for \"{count} docs tagged\". Use as much plural forms (separated by \"|\") as your language support (see https://www.unicode.org/cldr/cldr-aux/charts/34/supplemental/language_plural_rules.html)"
},
"theme.docs.tagDocListPageTitle": {
"message": "{nDocsTagged} with \"{tagName}\"",
"description": "The title of the page for a docs tag"
},
"theme.docs.versions.unreleasedVersionLabel": {
"message": "This is unreleased documentation for {siteTitle} {versionLabel} version.",
"description": "The label used to tell the user that he's browsing an unreleased doc version"
},
"theme.docs.versions.unmaintainedVersionLabel": {
"message": "This is documentation for {siteTitle} {versionLabel}, which is no longer actively maintained.",
"description": "The label used to tell the user that he's browsing an unmaintained doc version"
},
"theme.docs.versions.latestVersionSuggestionLabel": {
"message": "For up-to-date documentation, see the {latestVersionLink} ({versionLabel}).",
"description": "The label used to tell the user to check the latest version"
},
"theme.docs.versions.latestVersionLinkLabel": {
"message": "latest version",
"description": "The label used for the latest version suggestion link label"
},
"theme.common.editThisPage": {
"message": "Edit this page",
"description": "The link label to edit the current page"
},
"theme.common.headingLinkTitle": {
"message": "Direct link to heading",
"description": "Title for link to heading"
},
"theme.lastUpdated.atDate": {
"message": " on {date}",
"description": "The words used to describe on which date a page has been last updated"
},
"theme.lastUpdated.byUser": {
"message": " by {user}",
"description": "The words used to describe by who the page has been last updated"
},
"theme.lastUpdated.lastUpdatedAtBy": {
"message": "Last updated{atDate}{byUser}",
"description": "The sentence used to display when a page has been last updated, and by who"
},
"theme.navbar.mobileSidebarSecondaryMenu.backButtonLabel": {
"message": "← Back to main menu",
"description": "The label of the back button to return to main menu, inside the mobile navbar sidebar secondary menu (notably used to display the docs sidebar)"
},
"theme.navbar.mobileVersionsDropdown.label": {
"message": "Versions",
"description": "The label for the navbar versions dropdown on mobile view"
},
"theme.common.skipToMainContent": {
"message": "Skip to main content",
"description": "The skip to content label used for accessibility, allowing to rapidly navigate to main content with keyboard tab/enter navigation"
},
"theme.TOCCollapsible.toggleButtonLabel": {
"message": "On this page",
"description": "The label used by the button on the collapsible TOC component"
},
"theme.tags.tagsListLabel": {
"message": "Tags:",
"description": "The label alongside a tag list"
},
"theme.tags.tagsPageTitle": {
"message": "Tags",
"description": "The title of the tag list page"
},
"Welcome to my Docusaurus translated site!": {
"message": "Welcome to my Docusaurus translated site!",
"description": "The homepage main heading"
}
}
---
slug: first-blog-post
title: First Blog Post
authors:
name: Gao Wei
title: Docusaurus Core Team
url: https://github.com/wgao19
image_url: https://github.com/wgao19.png
tags: [hola, docusaurus]
---
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
---
slug: long-blog-post
title: Long Blog Post
authors: endi
tags: [hello, docusaurus]
---
This is the summary of a very long blog post,
Use a `<!--` `truncate` `-->` comment to limit blog post size in the list view.
<!--truncate-->
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Pellentesque elementum dignissim ultricies. Fusce rhoncus ipsum tempor eros aliquam consequat. Lorem ipsum dolor sit amet
---
slug: mdx-blog-post
title: MDX Blog Post
authors: [slorber]
tags: [docusaurus]
---
Blog posts support [Docusaurus Markdown features](https://docusaurus.io/docs/markdown-features), such as [MDX](https://mdxjs.com/).
:::tip
Use the power of React to create interactive blog posts.
```js
<button onClick={() => alert('button clicked!')}>Click me!</button>
```
<button onClick={() => alert('button clicked!')}>Click me!</button>
:::
Binary file not shown (94 KiB image)
---
slug: welcome
title: Welcome
authors: [slorber, yangshun]
tags: [facebook, hello, docusaurus]
---
[Docusaurus blogging features](https://docusaurus.io/docs/blog) are powered by the [blog plugin](https://docusaurus.io/docs/api/plugins/@docusaurus/plugin-content-blog).
Simply add Markdown files (or folders) to the `blog` directory.
Regular blog authors can be added to `authors.yml`.
The blog post date can be extracted from filenames, such as:
- `2019-05-30-welcome.md`
- `2019-05-30-welcome/index.md`
A blog post folder can be convenient to co-locate blog post images:
![Docusaurus Plushie](./docusaurus-plushie-banner.jpeg)
The blog supports tags as well!
**And if you don't want a blog**: just delete this directory, and use `blog: false` in your Docusaurus config.
endi:
name: Endilie Yacop Sucipto
title: Maintainer of Docusaurus
url: https://github.com/endiliey
image_url: https://github.com/endiliey.png
yangshun:
name: Yangshun Tay
title: Front End Engineer @ Facebook
url: https://github.com/yangshun
image_url: https://github.com/yangshun.png
slorber:
name: Sébastien Lorber
title: Docusaurus maintainer
url: https://sebastienlorber.com
image_url: https://github.com/slorber.png
{
"title": {
"message": "Blog",
"description": "The title for the blog used in SEO"
},
"description": {
"message": "Blog",
"description": "The description for the blog used in SEO"
},
"sidebar.title": {
"message": "Recent posts",
"description": "The label for the left sidebar"
}
}
{
"version.label": {
"message": "Next",
"description": "The label for version current"
},
"sidebar.tutorialSidebar.category.简介": {
"message": "Introduction",
"description": "The label for category 简介 in sidebar tutorialSidebar"
},
"sidebar.tutorialSidebar.category.快速开始": {
"message": "Quick Start",
"description": "The label for category 快速开始 in sidebar tutorialSidebar"
},
"sidebar.tutorialSidebar.category.构建工具": {
"message": "Build Tool",
"description": "The label for category 构建工具 in sidebar tutorialSidebar"
},
"sidebar.tutorialSidebar.category.框架组件": {
"message": "Components",
"description": "The label for category 框架组件 in sidebar tutorialSidebar"
},
"sidebar.tutorialSidebar.category.其他组件": {
"message": "Other Components",
"description": "The label for category 其他组件 in sidebar tutorialSidebar"
},
"sidebar.tutorialSidebar.category.问题汇总": {
"message": "Problems",
"description": "The label for category 问题汇总 in sidebar tutorialSidebar"
}
}
---
sidebar_position: 2
---
# api syntax
## api example
```go
/**
* api syntax example and syntax description
*/
// api syntax version
syntax = "v1"
// import literal
import "foo.api"
// import group
import (
"bar.api"
"foo/bar.api"
)
info(
author: "songmeizi"
date: "2020-01-08"
desc: "api syntax example and syntax description"
)
// type literal
type Foo{
Foo int `json:"foo"`
}
// type group
type(
Bar{
Bar int `json:"bar"`
}
)
// service block
@server(
jwt: Auth
group: foo
)
service foo-api{
@doc "foo"
@handler foo
post /foo (Foo) returns (Bar)
}
```
## api syntax structure
* syntax syntax declaration
* import syntax block
* info syntax block
* type syntax block
* service syntax block
* Hidden channels
:::tip
In the syntax structure above, each syntax block can be declared anywhere in the .api file; syntactically, the file is parsed block by block.
However, to improve readability, we recommend declaring them in the order above, as the order of syntax blocks may be enforced by a strict mode in the future.
:::
### syntax syntax declaration
`syntax` is a newly added syntax construct, introduced to address the following:
* Quickly locating problematic syntax constructs against api versions
* Parsing syntax for versions
* Preventing api syntax from being forward compatible due to major version upgrades
:::caution
The api being imported must match the syntax version of the main api.
:::
**Syntax Definition**
```antlrv4
'syntax'={checkVersion(p)}STRING
```
**Syntax Description**
syntax: fixed token that marks the beginning of a syntax structure
checkVersion: custom go method to check whether `STRING` is a legal version number; the current detection logic requires STRING to satisfy the regular expression `(?m)"v[1-9][0-9]*"`.
STRING: a string wrapped in double quotes, such as "v1"
An api syntax file can contain only 0 or 1 syntax declaration; if there is no syntax declaration, the version defaults to v1.
**Examples of correct syntax** ✅
eg1: non-standard writing
```api
syntax="v1"
```
eg2: normative writing (recommended)
```api
syntax = "v2"
```
**Examples of incorrect syntax** ❌
eg1
```api
syntax = "v0"
```
eg2
```api
syntax = v1
```
eg3
```api
syntax = "V1"
```
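The version check described above can be approximated with a Go regexp, following the `(?m)"v[1-9][0-9]*"` pattern quoted earlier (a sketch only; the real checkVersion lives inside goctl's parser):

```go
package main

import (
	"fmt"
	"regexp"
)

// versionRe accepts a double-quoted "v" followed by a number with no
// leading zero, mirroring the rule stated above.
var versionRe = regexp.MustCompile(`^"v[1-9][0-9]*"$`)

func checkVersion(s string) bool { return versionRe.MatchString(s) }

func main() {
	fmt.Println(checkVersion(`"v1"`)) // true
	fmt.Println(checkVersion(`"v0"`)) // false
	fmt.Println(checkVersion(`"V1"`)) // false
}
```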
## import syntax block
As business size increases, more and more types and services get defined in the api file. Keeping all the syntax descriptions in one api file greatly increases the difficulty of reading and maintenance. The import syntax block helps us solve this problem by splitting the api file.
By splitting api files and declaring different api files according to certain rules, we can reduce the difficulty of reading and maintaining them.
:::caution
Unlike golang, import here does not contain a package declaration; it is only the inclusion of a file path. Parsing eventually merges all declarations into a single spec.
You cannot import the same path more than once, otherwise parsing fails.
:::
**Syntax Definition**
```antlrv4
'import' {checkImportValue(p)}STRING
|'import' '(' ({checkImportValue(p)}STRING)+ ')'
```
**Syntax Description**
import: fixed token, marking the beginning of an import syntax
checkImportValue: custom go method to check whether `STRING` is a legal file path; the current detection logic requires STRING to satisfy the regular expression `(?m)"(/?[a-zA-Z0-9_#-])+\.api"`.
STRING: a string wrapped in double quotes, e.g. "foo.api"
**Examples of correct syntax** ✅
eg
```api
import "foo.api"
import "foo/bar.api"
import(
"bar.api"
"foo/bar/foo.api"
)
```
**Examples of incorrect syntax** ❌
eg
```api
import foo.api
import "foo.txt"
import (
bar.api
bar.api
)
```
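Similarly, the path check performed by checkImportValue can be approximated with a Go regexp built from the pattern quoted above (a sketch; the real implementation lives inside goctl's parser):

```go
package main

import (
	"fmt"
	"regexp"
)

// importRe accepts a double-quoted, optionally slash-separated path
// ending in .api, mirroring the rule stated above.
var importRe = regexp.MustCompile(`^"(/?[a-zA-Z0-9_#-])+\.api"$`)

func main() {
	fmt.Println(importRe.MatchString(`"foo.api"`))     // true
	fmt.Println(importRe.MatchString(`"foo/bar.api"`)) // true
	fmt.Println(importRe.MatchString(`"foo.txt"`))     // false
}
```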
## info syntax block
The info syntax block is a syntax body containing multiple key-value pairs. It is equivalent to a description of an api service, and the parser maps it into the spec.Spec so that the meta elements can be carried when translating to other languages (golang, java, etc.). If it is just a description of the current api, without considering translation to other languages, a simple multi-line comment or a java-style documentation comment is sufficient; see **Hidden channels** below for comment descriptions.
:::caution
Duplicate keys are not allowed; each api file may contain only 0 or 1 info syntax block.
:::
**Syntax Definition**
```antlrv4
'info' '(' (ID {checkKeyValue(p)}VALUE)+ ')'
```
**Syntax Description**
info: fixed token, marking the beginning of an info syntax block
checkKeyValue: custom go method that checks whether `VALUE` is a legal value.
VALUE: the value corresponding to the key. A single-line value may contain any character except '\r', '\n', and '/'; multi-line values must be wrapped in double quotes, though it is strongly recommended to wrap every value in double quotes.
**Examples of correct syntax** ✅
eg1: non-standard writing
```api
info(
foo: foo value
bar:"bar value"
desc:"long long long long
long long text"
)
```
eg2: normative writing (recommended)
```api
info(
foo: "foo value"
bar: "bar value"
desc: "long long long long long long text"
)
```
**Examples of incorrect syntax** ❌
eg1: no key-value content
```api
info()
```
eg2: missing colon
```api
info(
foo value
)
```
eg3: key-value not on its own line
```api
info(foo:"value")
```
eg4: missing key
```api
info(
: "value"
)
```
eg5: illegal key
```api
info(
12: "value"
)
```
eg6: old-version multi-line syntax has been removed
```api
info(
foo: >
some text
<
)
```
## type syntax block
In an api service we need structures (classes) to carry the request body and response body, so we need to declare structures for that purpose. The type syntax block evolved from the golang type declaration and retains some golang characteristics, including:
* preserves the golang built-in data types `bool`, `int`, `int8`, `int16`, `int32`, `int64`, `uint`, `uint8`, `uint16`, `uint32`, `uint64`, `uintptr`, `float32`, `float64`, `complex64`, `complex128`, `string`, `byte`, `rune`
* compatible with golang struct-style declarations
* retains the golang keywords
:::caution
* alias is not supported
* The time.Time data type is not supported
* Structure names and field names cannot be golang keywords
:::
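Since structure and field names must not collide with golang keywords, a quick way to check a candidate name is the standard library's `go/token` package. This is only an illustrative check, not goctl's actual validation code:

```go
package main

import (
	"fmt"
	"go/token"
)

// isValidName reports whether a name can be used as a type or field
// name in generated Go code: it must not be a Go keyword.
func isValidName(name string) bool {
	return !token.IsKeyword(name)
}

func main() {
	for _, name := range []string{"Foo", "var", "interface", "FooBar"} {
		fmt.Printf("%-10s valid: %v\n", name, isValidName(name))
	}
}
```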
**Syntax Definition**
Since it is similar to golang, it will not be described in detail here; see the typeSpec definition in [ApiParser.g4](https://github.com/zeromicro/go-zero/blob/master/tools/goctl/api/parser/g4/ApiParser.g4).
**Syntax description**
Refer to golang writing style
**Correct syntax example** ✅
eg1: non-normative writing
```api
type Foo struct{
Id int `path:"id"` // ①
Foo int `json:"foo"`
}
type Bar struct{
// Non-exportable fields
bar int `form:"bar"`
}
type(
// Non-exportable Structs
fooBar struct{
FooBar int `json:"fooBar"`
}
)
```
eg2: normative writing (recommended)
```api
type Foo{
Id int `path:"id"`
Foo int `json:"foo"`
}
type Bar{
Bar int `form:"bar"`
}
type(
FooBar{
FooBar int `json:"fooBar"`
}
)
```
**Examples of incorrect syntax** ❌
eg
```api
type Gender int // Not supported
// non struct token
type Foo structure{
CreateTime time.Time // Time is not supported and tag is not declared
}
// golang keyword var
type var{}
type Foo{
// golang keyword interface
    Foo interface // no tag declared
}
type Foo{
foo int
// map key must be a golang built-in data type with no tag declared
m map[Bar]string
}
```
:::tip
tag definitions are the same as golang json tag syntax. In addition to the json tag, go-zero provides some other tags to describe fields.
See the table below for details.
:::
* tag table
<table>
<tr>
<td>tag key</td> <td>Description</td> <td>Provider</td><td>Valid range </td> <td>Example </td>
</tr>
<tr>
<td>json</td> <td>json serialization tag</td> <td>golang</td> <td>request, response</td> <td><code>json:"foo"</code></td>
</tr>
<tr>
<td>path</td> <td>Routing path, such as <code>/foo/:id</code></td> <td>go-zero</td> <td>request</td> <td><code>path:"id"</code></td>
</tr>
<tr>
<td>form</td> <td>Identifies that the request body is a form (in the POST method) or a query (in the GET method, <code>/search?name=keyword</code>)</td> <td>go-zero</td> <td>request</td> <td><code>form:"name"</code></td>
</tr>
<tr>
<td>header</td> <td>HTTP header, such as <code>Name: value</code></td> <td>go-zero</td> <td>request</td> <td> <code>header:"name"</code></td>
</tr>
</table>
* tag modifier
Common parameter verification description
<table>
<tr>
<td>tag key </td> <td>Description </td> <td>Provider </td> <td>Valid range </td> <td>Example </td>
</tr>
<tr>
<td>optional</td> <td>Defines the current field as an optional parameter</td> <td>go-zero</td> <td>request</td> <td><code>json:"name,optional"</code></td>
</tr>
<tr>
<td>options</td> <td>Defines the enumeration values of the current field, multiple values separated by a vertical bar |</td> <td>go-zero</td> <td>request</td> <td><code>json:"gender,options=male|female"</code></td>
</tr>
<tr>
<td>default</td> <td>Defines the default value of the current field</td> <td>go-zero</td> <td>request</td> <td><code>json:"gender,default=male"</code></td>
</tr>
<tr>
<td>range</td> <td>Defines the value range of the current field</td> <td>go-zero</td> <td>request</td> <td><code>json:"age,range=[0:120]"</code></td>
</tr>
</table>
:::tip
Tag modifiers must be separated from the tag value by a comma, inside the quotes
:::
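Putting the modifiers together, a request type could combine them like this (the type and field names here are hypothetical):

```api
type SearchReq {
	Name    string `form:"name"`
	Age     int    `form:"age,range=[0:120]"`
	Gender  string `form:"gender,options=male|female,default=male"`
	Keyword string `form:"keyword,optional"`
}
```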
## service syntax block
service syntax block is used to define api services, including service name, service metadata, middleware declaration, routes, handlers, etc.
:::caution
* The names of the main api and the api service being imported must be the same, and there must be no service name ambiguity.
* handler names must not be repeated
* routes (request method + request path) must not be duplicated
* The request body must be declared as a normal (non-pointer) struct; the response body has some forward-compatible processing, see below for details
:::
**Syntax Definition**
```antlrv4
serviceSpec: atServer? serviceApi;
atServer: '@server' lp='(' kvLit+ rp=')';
serviceApi: {match(p,"service")}serviceToken=ID serviceName lbrace='{' serviceRoute* rbrace='}';
serviceRoute: atDoc? (atServer|atHandler) route;
atDoc: '@doc' lp='('? ((kvLit+)|STRING) rp=')'?;
atHandler: '@handler' ID;
route: {checkHttpMethod(p)}httpMethod=ID path request=body? returnToken=ID? response=replybody?;
body: lp='(' (ID)? rp=')';
replybody: lp='(' dataType? rp=')';
// kv
kvLit: key=ID {checkKeyValue(p)}value=LINE_VALUE;
serviceName: (ID '-'?)+;
path: (('/' (ID ('-' ID)*))|('/:' (ID ('-' ID)?)))+;
```
**Syntax Description**
serviceSpec: contains an optional `atServer` syntax block and a `serviceApi` syntax block, following the sequence pattern (a service must be written in this order, otherwise parsing errors occur)
atServer: optional syntax block that defines key-value server metadata, starting with the fixed token '@server'.
It can describe either the serviceApi or the route syntax block; some special keys must be noted depending on which block it describes, see **atServer key description**.
serviceApi: contains one or more `serviceRoute` syntax blocks
serviceRoute: contains `atDoc`, handler and `route` according to the sequence pattern
atDoc: optional syntax block, a key-value description of a single route, which is carried into the spec.Spec structure after parsing; it can be ignored if you don't care about passing it to spec
handler: the handler-level description of a route; the handler name can be specified with the `handler` key via atServer, or defined directly with the atHandler syntax block
atHandler: the fixed token '@handler' followed by a value matching the regular expression `[_a-zA-Z][a-zA-Z_-]*`, used to declare a handler name
route: the route, consisting of `httpMethod`, `path`, an optional `request` and an optional `response`; `httpMethod` must be lowercase
body: the api request body syntax definition; an optional ID value wrapped in ()
replyBody: the api response body syntax definition; a struct wrapped in (). ~~Arrays get forward-compatible processing and may be deprecated later; it is strongly recommended to wrap the response in a struct rather than use an array directly~~
kvLit: same as info key-value
serviceName: one or more ID values, optionally joined with '-'
path: the api request path; it must start with '/' or '/:', must not end with '/', and may contain IDs or IDs joined by '-' in between
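The handler-name and path rules above can be approximated with ordinary regular expressions. The patterns below are transcribed from the grammar as a sketch — they are not goctl's actual implementation, and the path pattern is slightly looser than the grammar:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// handler names: [_a-zA-Z][a-zA-Z_-]* per the atHandler rule
	handlerRe = regexp.MustCompile(`^[_a-zA-Z][a-zA-Z_-]*$`)
	// paths: one or more "/segment" or "/:param" parts; segments may
	// join IDs with '-', and the path must not end with a trailing '/'
	pathRe = regexp.MustCompile(`^(/:?[a-zA-Z_][a-zA-Z0-9_]*(-[a-zA-Z_][a-zA-Z0-9_]*)*)+$`)
)

func main() {
	fmt.Println(handlerRe.MatchString("getUserInfo")) // true
	fmt.Println(handlerRe.MatchString("9handler"))    // false
	fmt.Println(pathRe.MatchString("/foo/:id"))       // true
	fmt.Println(pathRe.MatchString("/foo/"))          // false
}
```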
**atServer key description**
When modifying service
<table>
<tr>
<td>key</td><td>Description</td><td>Example</td>
</tr>
<tr>
<td>jwt</td><td>Declares that all routes under the current service require jwt authentication; code containing the jwt logic will be generated automatically</td><td><code>jwt: Auth</code></td>
</tr>
<tr>
<td>group</td><td>Declare the current service or routing file group</td><td><code>group: login</code></td>
</tr>
<tr>
<td>middleware</td><td>Declare that the current service needs to enable middleware</td><td><code>middleware: AuthMiddleware</code></td>
</tr>
<tr>
<td>prefix</td><td>Add routing group</td><td><code>prefix: /api</code></td>
</tr>
</table>
When modifying the route
<table>
<tr>
<td>key</td><td>Description</td><td>Example</td>
</tr>
<tr>
<td>handler</td><td>Declare a handler</td><td>-</td>
</tr>
</table>
**Example of correct syntax** ✅
eg1: non-normative writing
```api
@server(
jwt: Auth
group: foo
middleware: AuthMiddleware
prefix /api
)
service foo-api{
@doc(
summary: foo
)
@server(
handler: foo
)
// Non-exportable body
post /foo/:id (foo) returns (bar)
@doc "bar"
@handler bar
post /bar returns ([]int)// Arrays are not recommended as response bodies
@handler fooBar
post /foo/bar (Foo) returns // 'returns' can be omitted
}
```
eg2: normative writing (recommended)
```api
@server(
jwt: Auth
group: foo
middleware: AuthMiddleware
prefix: /api
)
service foo-api{
@doc "foo"
@handler foo
post /foo/:id (Foo) returns (Bar)
}
service foo-api{
@handler ping
get /ping
@doc "foo"
@handler bar
post /bar/:id (Foo)
}
```
**Examples of incorrect syntax** ❌
```api
// Empty server syntax blocks are not supported
@server(
)
// Empty service syntax blocks are not supported
service foo-api{
}
service foo-api{
  @doc kkkk // The short-form doc must be wrapped in English double quotation marks
@handler foo
post /foo
@handler foo // Repeated handlers
post /bar
@handler fooBar
post /bar // Duplicate Routing
// @handler and @doc are in the wrong order
@handler someHandler
@doc "some doc"
post /some/path
// handler missing
post /some/path/:id
@handler reqTest
post /foo/req (*Foo) // Data types other than normal structures are not supported as request bodies
@handler replyTest
post /foo/reply returns (*Foo) // Do not support data types other than ordinary structures, arrays (forward compatible, subsequently considered deprecated) as response bodies
}
```
## Hidden Channels
We will only talk about comments here, because whitespace and newline symbols are not used at the moment.
### Single line comments
**Syntax definition**
```antlrv4
'//' ~[\r\n]*
```
**Syntax description**
As you know from the syntax definition, a single line comment must start with `//` and the content must not contain a line break
**Correct syntax example** ✅
```api
// doc
// comment
```
**Examples of incorrect syntax** ❌
```api
// break
line comments
```
### java style documentation comments
**Syntax Definition**
```antlrv4
'/*' .*? '*/'
```
**Syntax description**
As the syntax definition shows, a documentation comment must start with `/*` and end with `*/`, and may contain any characters in between.
**Example of correct syntax** ✅
```api
/**
* java-style doc
*/
```
**Examples of incorrect syntax** ❌
```api
/*
* java-style doc */
*/
```
## Doc&Comment
**Doc**
We specify that all comments (single-line or multi-line) between the line after the previous syntax block (non-hidden-channel content) and the first element of the current syntax block are doc, and the original `//`, `/*`, `*/` tokens are retained.
**Comment**
We specify that a comment block (on the same line, or on multiple lines) starting from the line where the last element of the current syntax block is located is a comment and retains the `//`, `/*`, `*/` primitive tokens.
Syntax blocks that support Doc and Comment:
<table>
<tr>
<td>Grammar blocks</td><td>parent syntax block</td><td>Doc</td><td>Comment</td>
</tr>
<tr>
<td>syntaxLit</td><td>api</td><td></td><td></td>
</tr>
<tr>
<td>kvLit</td><td>infoSpec</td><td></td><td></td>
</tr>
<tr>
<td>importLit</td><td>importSpec</td><td></td><td></td>
</tr>
<tr>
<td>typeLit</td><td>api</td><td></td><td></td>
</tr>
<tr>
<td>typeLit</td><td>typeBlock</td><td></td><td></td>
</tr>
<tr>
<td>field</td><td>typeLit</td><td></td><td></td>
</tr>
<tr>
<td>key-value</td><td>atServer</td><td></td><td></td>
</tr>
<tr>
<td>atHandler</td><td>serviceRoute</td><td></td><td></td>
</tr>
<tr>
<td>route</td><td>serviceRoute</td><td></td><td></td>
</tr>
</table>
The following shows how doc and comment are written for each syntax block:
```api
// syntaxLit doc
syntax = "v1" // syntaxLit comment
info(
// kvLit doc
author: songmeizi // kvLit comment
)
// typeLit doc
type Foo {}
type(
// typeLit doc
Bar{}
FooBar{
        // field doc
        Name int // field comment
}
)
@server(
/**
* kvLit doc
  * enable jwt authentication
*/
jwt: Auth /**kvLit comment*/
)
service foo-api{
// atHandler doc
@handler foo //atHandler comment
/*
* route doc
* post request
* path /foo
   * Request Body: Foo
   * Response Body: Foo
*/
post /foo (Foo) returns (Foo) // route comment
}
```
---
sidebar_position: 2
---
# Build API
If you are just starting a `go-zero` `api` demo project, you can get an api service up and running without writing a single line of code. In a traditional api project, we have to create directories at every level, write structures, define routes, and add `logic` files; for a single protocol this series of steps takes roughly 5-6 minutes before we can actually start writing business logic, not counting the various errors that may occur along the way, and this preparation time grows proportionally with the number of services and protocols. `goctl api` can do all of this work for you: no matter how many protocols you define, it finishes in under 10 seconds.
:::tip
Writing the structures and defining the routes is replaced by writing the api file, so what you save is the time spent creating folders and adding the various files and resource dependencies.
:::
### api command description
```shell
$ goctl api -h
```
```text
NAME:
goctl api - generate api related files
USAGE:
goctl api command [command options] [arguments...]
COMMANDS:
new fast create api service
format format api files
validate validate api file
doc generate doc files
go generate go files for provided api in yaml file
java generate java files for provided api in api file
ts generate ts files for provided api in api file
dart generate dart files for provided api in api file
kt generate kotlin code for provided api file
plugin custom file generator
OPTIONS:
-o value the output api file
--help, -h show help
```
As shown above, the api command contains many subcommands and flags grouped by function. Here we focus on the `go` subcommand, which generates a golang api service; its usage help can be viewed with `goctl api go -h`:
```shell
$ goctl api go -h
```
```text
NAME:
goctl api go - generate go files for provided api in yaml file
USAGE:
goctl api go [command options] [arguments...]
OPTIONS:
--dir value the target dir
--api value the api file
--style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
```
* --dir code output directory
* --api Specify the api source file
* --style Specify the file name style of the generated code file, see [file name naming style description](https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md) for details
### Usage examples
```shell
$ goctl api go -api user.api -dir . -style gozero
```
---
sidebar_position: 4
---
# Build Model
`goctl model` is one of the components of the tools module under `go-zero`, which currently supports recognizing `mysql ddl` for `model` layer code generation, and can optionally generate code logic with or without `redis cache` via the command line or the `idea` plugin (to be supported soon).
## Quick start
* Generated via ddl
```shell
$ goctl model mysql ddl -src="./*.sql" -dir="./sql/model" -c
```
The CRUD code can be generated quickly by executing the above command.
```text
model
│   ├── error.go
│   └── usermodel.go
```
* Generated via datasource
```shell
$ goctl model mysql datasource -url="user:password@tcp(127.0.0.1:3306)/database" -table="*" -dir="./model"
```
* Example of generating code
```go
package model
import (
"database/sql"
"fmt"
"strings"
"time"
"github.com/tal-tech/go-zero/core/stores/cache"
"github.com/tal-tech/go-zero/core/stores/sqlc"
"github.com/tal-tech/go-zero/core/stores/sqlx"
"github.com/tal-tech/go-zero/core/stringx"
"github.com/tal-tech/go-zero/tools/goctl/model/sql/builderx"
)
var (
userFieldNames = builderx.RawFieldNames(&User{})
userRows = strings.Join(userFieldNames, ",")
userRowsExpectAutoSet = strings.Join(stringx.Remove(userFieldNames, "`id`", "`create_time`", "`update_time`"), ",")
userRowsWithPlaceHolder = strings.Join(stringx.Remove(userFieldNames, "`id`", "`create_time`", "`update_time`"), "=?,") + "=?"
cacheUserNamePrefix = "cache#User#name#"
cacheUserMobilePrefix = "cache#User#mobile#"
cacheUserIdPrefix = "cache#User#id#"
cacheUserPrefix = "cache#User#user#"
)
type (
UserModel interface {
Insert(data User) (sql.Result, error)
FindOne(id int64) (*User, error)
FindOneByUser(user string) (*User, error)
FindOneByName(name string) (*User, error)
FindOneByMobile(mobile string) (*User, error)
Update(data User) error
Delete(id int64) error
}
defaultUserModel struct {
sqlc.CachedConn
table string
}
User struct {
Id int64 `db:"id"`
		User       string    `db:"user"`     // user
		Name       string    `db:"name"`     // user name
		Password   string    `db:"password"` // user password
		Mobile     string    `db:"mobile"`   // mobile phone number
		Gender     string    `db:"gender"`   // male|female|unpublished
		Nickname   string    `db:"nickname"` // user nickname
CreateTime time.Time `db:"create_time"`
UpdateTime time.Time `db:"update_time"`
}
)
func NewUserModel(conn sqlx.SqlConn, c cache.CacheConf) UserModel {
return &defaultUserModel{
CachedConn: sqlc.NewConn(conn, c),
table: "`user`",
}
}
func (m *defaultUserModel) Insert(data User) (sql.Result, error) {
userNameKey := fmt.Sprintf("%s%v", cacheUserNamePrefix, data.Name)
userMobileKey := fmt.Sprintf("%s%v", cacheUserMobilePrefix, data.Mobile)
userKey := fmt.Sprintf("%s%v", cacheUserPrefix, data.User)
ret, err := m.Exec(func(conn sqlx.SqlConn) (result sql.Result, err error) {
query := fmt.Sprintf("insert into %s (%s) values (?, ?, ?, ?, ?, ?)", m.table, userRowsExpectAutoSet)
return conn.Exec(query, data.User, data.Name, data.Password, data.Mobile, data.Gender, data.Nickname)
}, userNameKey, userMobileKey, userKey)
return ret, err
}
func (m *defaultUserModel) FindOne(id int64) (*User, error) {
userIdKey := fmt.Sprintf("%s%v", cacheUserIdPrefix, id)
var resp User
err := m.QueryRow(&resp, userIdKey, func(conn sqlx.SqlConn, v interface{}) error {
query := fmt.Sprintf("select %s from %s where `id` = ? limit 1", userRows, m.table)
return conn.QueryRow(v, query, id)
})
switch err {
case nil:
return &resp, nil
case sqlc.ErrNotFound:
return nil, ErrNotFound
default:
return nil, err
}
}
func (m *defaultUserModel) FindOneByUser(user string) (*User, error) {
userKey := fmt.Sprintf("%s%v", cacheUserPrefix, user)
var resp User
err := m.QueryRowIndex(&resp, userKey, m.formatPrimary, func(conn sqlx.SqlConn, v interface{}) (i interface{}, e error) {
query := fmt.Sprintf("select %s from %s where `user` = ? limit 1", userRows, m.table)
if err := conn.QueryRow(&resp, query, user); err != nil {
return nil, err
}
return resp.Id, nil
}, m.queryPrimary)
switch err {
case nil:
return &resp, nil
case sqlc.ErrNotFound:
return nil, ErrNotFound
default:
return nil, err
}
}
func (m *defaultUserModel) FindOneByName(name string) (*User, error) {
userNameKey := fmt.Sprintf("%s%v", cacheUserNamePrefix, name)
var resp User
err := m.QueryRowIndex(&resp, userNameKey, m.formatPrimary, func(conn sqlx.SqlConn, v interface{}) (i interface{}, e error) {
query := fmt.Sprintf("select %s from %s where `name` = ? limit 1", userRows, m.table)
if err := conn.QueryRow(&resp, query, name); err != nil {
return nil, err
}
return resp.Id, nil
}, m.queryPrimary)
switch err {
case nil:
return &resp, nil
case sqlc.ErrNotFound:
return nil, ErrNotFound
default:
return nil, err
}
}
func (m *defaultUserModel) FindOneByMobile(mobile string) (*User, error) {
userMobileKey := fmt.Sprintf("%s%v", cacheUserMobilePrefix, mobile)
var resp User
err := m.QueryRowIndex(&resp, userMobileKey, m.formatPrimary, func(conn sqlx.SqlConn, v interface{}) (i interface{}, e error) {
query := fmt.Sprintf("select %s from %s where `mobile` = ? limit 1", userRows, m.table)
if err := conn.QueryRow(&resp, query, mobile); err != nil {
return nil, err
}
return resp.Id, nil
}, m.queryPrimary)
switch err {
case nil:
return &resp, nil
case sqlc.ErrNotFound:
return nil, ErrNotFound
default:
return nil, err
}
}
func (m *defaultUserModel) Update(data User) error {
userIdKey := fmt.Sprintf("%s%v", cacheUserIdPrefix, data.Id)
_, err := m.Exec(func(conn sqlx.SqlConn) (result sql.Result, err error) {
query := fmt.Sprintf("update %s set %s where `id` = ?", m.table, userRowsWithPlaceHolder)
return conn.Exec(query, data.User, data.Name, data.Password, data.Mobile, data.Gender, data.Nickname, data.Id)
}, userIdKey)
return err
}
func (m *defaultUserModel) Delete(id int64) error {
data, err := m.FindOne(id)
if err != nil {
return err
}
userNameKey := fmt.Sprintf("%s%v", cacheUserNamePrefix, data.Name)
userMobileKey := fmt.Sprintf("%s%v", cacheUserMobilePrefix, data.Mobile)
userIdKey := fmt.Sprintf("%s%v", cacheUserIdPrefix, id)
userKey := fmt.Sprintf("%s%v", cacheUserPrefix, data.User)
_, err = m.Exec(func(conn sqlx.SqlConn) (result sql.Result, err error) {
query := fmt.Sprintf("delete from %s where `id` = ?", m.table)
return conn.Exec(query, id)
}, userNameKey, userMobileKey, userIdKey, userKey)
return err
}
func (m *defaultUserModel) formatPrimary(primary interface{}) string {
return fmt.Sprintf("%s%v", cacheUserIdPrefix, primary)
}
func (m *defaultUserModel) queryPrimary(conn sqlx.SqlConn, v, primary interface{}) error {
query := fmt.Sprintf("select %s from %s where `id` = ? limit 1", userRows, m.table)
return conn.QueryRow(v, query, primary)
}
```
## Usage
```text
$ goctl model mysql -h
```
```text
NAME:
   goctl model mysql - generate mysql model
USAGE:
goctl model mysql command [command options] [arguments...]
COMMANDS:
     ddl         generate mysql model from ddl
     datasource  generate model from datasource
OPTIONS:
--help, -h show help
```
## Generate rules
* Default Rules
By default, if the table defines `create_time` and `update_time` fields (case-insensitive, underscore naming style) with the default value `CURRENT_TIMESTAMP`, and `update_time` additionally supports `ON UPDATE CURRENT_TIMESTAMP`, these two fields are removed from the assignment lists of the generated `insert` and `update` statements. Of course, it does not matter if you do not need these two fields.
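For example, a table definition that satisfies these default rules might look like the following (an illustrative DDL, not one shipped with goctl):

```sql
CREATE TABLE `user` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `name` varchar(255) NOT NULL DEFAULT '' COMMENT 'user name',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `update_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;
```

With this DDL, the generated `insert` and `update` statements do not assign `create_time` or `update_time`; the database maintains them.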
* With cache mode
* ddl
```shell
$ goctl model mysql -src={patterns} -dir={dir} -cache
```
help
```
NAME:
goctl model mysql ddl - generate mysql model from ddl
USAGE:
goctl model mysql ddl [command options] [arguments...]
OPTIONS:
--src value, -s value the path or path globbing patterns of the ddl
--dir value, -d value the target dir
--style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
--cache, -c generate code with cache [optional]
--idea for idea plugin [optional]
```
* datasource
```shell
$ goctl model mysql datasource -url={datasource} -table={patterns} -dir={dir} -cache=true
```
help
```text
NAME:
goctl model mysql datasource - generate model from datasource
USAGE:
goctl model mysql datasource [command options] [arguments...]
OPTIONS:
   --url value              the data source of database, like "root:password@tcp(127.0.0.1:3306)/database"
--table value, -t value the table or table globbing patterns in the database
--cache, -c generate code with cache [optional]
--dir value, -d value the target dir
--style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
--idea for idea plugin [optional]
```
:::tip
goctl model mysql ddl/datasource both have a new `--style` parameter to mark the file naming style.
:::
Currently only redis cache is supported. If you choose cache mode, the generated `FindOne(ByXxx)` and `Delete` code will include cache logic. Only single-field indexes (other than full-text indexes) are supported; for composite indexes we assume by default that no cache is needed, and since such code is not general-purpose it is not generated. In the user table example, the `id`, `name` and `mobile` fields are all single-field indexes.
* Without cache mode
* ddl
```shell
$ goctl model mysql ddl -src={patterns} -dir={dir}
```
* datasource
```shell
$ goctl model mysql datasource -url={datasource} -table={patterns} -dir={dir}
```
This generates code with only the basic CRUD structure.
## Cache
For the cache part, I chose to describe it in Q&A form, which I think gives a clearer picture of the cache features in the generated model.
* What information does the cache cache?
For primary key field caching, the entire structure information is cached, while for single index fields (except full-text indexes) the primary key field values are cached.
* Will the cache be cleared if the data is updated (`update`)?
Yes, but only the primary key cache information is cleared. Why?
* Why not generate code for `updateByXxx` and `deleteByXxx` as per single index field?
Theoretically there is no problem with it, but we believe model-layer data operations should act on the whole structure, including queries; I do not recommend querying only some of the fields (though I have no objection to it), otherwise our cache becomes meaningless.
* Why aren't methods like `findPageLimit` and `findAll` generated?
Currently, I think all the code except the basic CURD is <i>business-type</i> code, which I think is better for developers to write according to business needs.
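Taking the generated user model above as an example, a row with `id = 1` and `name = "foo"` would occupy cache keys roughly like this (illustrative values):

```text
cache#User#id#1      ->  full serialized User row   (primary key: caches the whole structure)
cache#User#name#foo  ->  1                          (single-index field: caches only the primary key value)
```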
## Type conversion rules
| mysql dataType | golang dataType | golang dataType(if null&&default null) |
|----------------|-----------------|----------------------------------------|
| bool | int64 | sql.NullInt64 |
| boolean | int64 | sql.NullInt64 |
| tinyint | int64 | sql.NullInt64 |
| smallint | int64 | sql.NullInt64 |
| mediumint | int64 | sql.NullInt64 |
| int | int64 | sql.NullInt64 |
| integer | int64 | sql.NullInt64 |
| bigint | int64 | sql.NullInt64 |
| float | float64 | sql.NullFloat64 |
| double | float64 | sql.NullFloat64 |
| decimal | float64 | sql.NullFloat64 |
| date | time.Time | sql.NullTime |
| datetime | time.Time | sql.NullTime |
| timestamp | time.Time | sql.NullTime |
| time | string | sql.NullString |
| year | time.Time | sql.NullInt64 |
| char | string | sql.NullString |
| varchar | string | sql.NullString |
| binary | string | sql.NullString |
| varbinary | string | sql.NullString |
| tinytext | string | sql.NullString |
| text | string | sql.NullString |
| mediumtext | string | sql.NullString |
| longtext | string | sql.NullString |
| enum | string | sql.NullString |
| set | string | sql.NullString |
| json | string | sql.NullString |
---
sidebar_position: 7
---
# Other
---
sidebar_position: 5
---
# Plugin
goctl supports custom plugins for the api command. So how do you customize a plugin? Let's first look at a usage example.
```shell
$ goctl api plugin -p goctl-android="android -package com.tal" -api user.api -dir .
```
The above command can be broken down into the following steps.
* goctl parses the api file
* goctl passes the parsed structure ApiSpec and parameters to the goctl-android executable
* goctl-android generates custom logic based on the ApiSpec structure.
The first part of this command, `goctl api plugin -p`, is fixed; `goctl-android="android -package com.tal"` is the plugin parameter, where `goctl-android` is the plugin binary and `android -package com.tal` are the plugin's custom arguments; `-api user.api -dir .` are goctl's generic arguments.
## How to write a custom plugin?
The go-zero framework includes a very simple custom plugin demo with the following code.
```go title="plugin.go"
package main
import (
"fmt"
"github.com/tal-tech/go-zero/tools/goctl/plugin"
)
func main() {
plugin, err := plugin.NewPlugin()
if err != nil {
panic(err)
}
if plugin.Api != nil {
fmt.Printf("api: %+v \n", plugin.Api)
}
fmt.Printf("dir: %s \n", plugin.Dir)
fmt.Println("Enjoy anything you want.")
}
```
`plugin, err := plugin.NewPlugin()` This line of code serves to parse the data passed from goctl, which contains the following parts.
```go
type Plugin struct {
Api *spec.ApiSpec
Style string
Dir string
}
```
:::tip
Api: defines the structure data of the api file
Style: optional parameter that can be used to control the file naming convention
Dir: working directory
:::
A complete android plugin demo project based on plugin:
[https://github.com/zeromicro/goctl-android](https://github.com/zeromicro/goctl-android)
---
sidebar_position: 3
---
# Build RPC
`goctl rpc` is the rpc service code generation module of the `goctl` scaffolding; it supports `proto` template generation and `rpc` service code generation. With the generated code you only need to focus on writing business logic instead of repetitive boilerplate, which speeds up development and reduces the code error rate.
## Features
* Easy to use
* Fast development efficiency
* Low error rate
* Close to protoc
## Quick start
### Way 1: Quickly generate greet services
Generate it with the command `goctl rpc new ${serviceName}`.
For example, to generate the greet rpc service:
```Bash
goctl rpc new greet
```
The code structure after execution is as follows:
```text
.
├── etc
│ └── greet.yaml
├── go.mod
├── greet
│ └── greet.pb.go
├── greet.go
├── greet.proto
├── greetclient
│ └── greet.go
└── internal
├── config
│ └── config.go
├── logic
│ └── pinglogic.go
├── server
│ └── greetserver.go
└── svc
└── servicecontext.go
```
:::tip
The pb folder name (fixed to pb in older versions) is taken from the last path segment of the `go_package` option value in the proto file, converted to a certain format; if `go_package` is not declared, the value is taken from `package`. The rough logic is shown below.
:::
```go title="google.golang.org/protobuf@v1.25.0/internal/strs/strings.go:71"
if option.Name == "go_package" {
ret.GoPackage = option.Constant.Source
}
...
if len(ret.GoPackage) == 0 {
ret.GoPackage = ret.Package.Name
}
ret.PbPackage = GoSanitized(filepath.Base(ret.GoPackage))
...
```
:::tip
The name of the call-layer folder is taken from the service name in the proto. If the service name is the same as the pb folder name, `client` is appended to the service name to distinguish them, keeping pb and call separated.
:::
```go
if strings.ToLower(proto.Service.Name) == strings.ToLower(proto.GoPackage) {
callDir = filepath.Join(ctx.WorkDir, strings.ToLower(stringx.From(proto.Service.Name+"_client").ToCamel()))
}
```
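The renaming rule shown above can be sketched with plain standard-library code (goctl itself uses its `stringx` helpers; `callDirName` here is a hypothetical function for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// callDirName mimics the rule: if the proto service name collides with
// the pb package name, append "client" before deriving the folder name.
func callDirName(serviceName, goPackage string) string {
	name := serviceName
	if strings.EqualFold(serviceName, goPackage) {
		name = serviceName + "_client"
	}
	// camel-case across '_' and lower-case the result, since generated
	// folder names are all lower case
	parts := strings.Split(name, "_")
	for i := 1; i < len(parts); i++ {
		if len(parts[i]) > 0 {
			parts[i] = strings.ToUpper(parts[i][:1]) + parts[i][1:]
		}
	}
	return strings.ToLower(strings.Join(parts, ""))
}

func main() {
	fmt.Println(callDirName("greet", "greet")) // greetclient
	fmt.Println(callDirName("User", "greet"))  // user
}
```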
### Way 2: Generate rpc service by specifying proto
* Generate proto template
```Bash
goctl rpc template -o=user.proto
```
```go title="user.proto"
syntax = "proto3";
package remote;
option go_package = "remote";
message Request {
  // username
  string username = 1;
  // user password
  string password = 2;
}
message Response {
  // user name
  string name = 1;
  // user gender
  string gender = 2;
}
service User {
  // Login
  rpc Login(Request) returns (Response);
}
```
* Generate rpc service code
```Bash
goctl rpc proto -src user.proto -dir .
```
## Preparation
* Go environment is installed
* protoc & protoc-gen-go are installed and environment variables are set
* For more questions, see <a href="#Notes">Notes</a>
## Usage
### rpc service generation usage
```Bash
goctl rpc proto -h
```
```Bash
NAME:
goctl rpc proto - generate rpc from proto
USAGE:
goctl rpc proto [command options] [arguments...]
OPTIONS:
--src value, -s value the file path of the proto source file
--proto_path value, -I value native command of protoc, specify the directory in which to search for imports. [optional]
--dir value, -d value the target path of the code
--style value the file naming format, see [https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md]
--idea whether the command execution environment is from idea plugin. [optional]
```
### Parameter description
* --src mandatory, proto data source, currently supports single proto file generation
* --proto_path optional, a native protoc flag used to specify where to look for proto imports; multiple paths can be specified, e.g. `goctl rpc -I={path1} -I={path2} ...`. It can be omitted when there is no import. The current proto path is already built in and does not need to be specified; see `protoc -h` for the detailed usage of `-I`.
* --dir optional, default is the directory where the proto file is located, the target directory of the generated code.
* --style optional, the output directory file naming style, see https://github.com/zeromicro/go-zero/tree/master/tools/goctl/config/readme.md for details.
* --idea optional, whether to execute in the idea plugin, terminal execution can be ignored
### What developers need to do
Focus on writing business code and leave the repetitive, non-business-related work to goctl. After the rpc service code is generated, developers only need to modify:
* write configuration files in the service (etc/xx.json, internal/config/config.go)
* Business logic writing in the service (internal/logic/xxlogic.go)
* Writing of resource context in the service (internal/svc/servicecontext.go)
### Caution
* proto does not currently support generating multiple files at the same time.
* proto does not support importing external dependency packages, and inline (nested) messages are not supported
* Currently the main file, shared files, and handler files are forcibly overwritten, while files that developers need to write by hand are never overwritten. Files in the overwritten category carry the following header:
``` shell
// Code generated by goctl. DO NOT EDIT!
// Source: xxx.proto
```
Please be careful not to write business code in these files either.
## proto import
* The requestType and returnType used in an rpc must be defined in the main proto file; messages themselves can import other proto files, just as with protoc.
### Incorrect import
```protobuf title="greet.proto"
syntax = "proto3";
package greet;
option go_package = "greet";
import "base/common.proto";
message Request {
string ping = 1;
}
message Response {
string pong = 1;
}
service Greet {
rpc Ping(base.In) returns(base.Out); // import is not supported in request and return types
}
```
### Correct import
```protobuf title="greet.proto"
syntax = "proto3";
package greet;
option go_package = "greet";
import "base/common.proto";
message Request {
base.In in = 1; // import is supported
}
message Response {
base.Out out = 2; // import is supported
}
service Greet {
rpc Ping(Request) returns(Response);
}
```
---
sidebar_position: 6
---
# Template
## Template manipulation
Template is the basis of data-driven generation, all code (rest api, rpc, model, docker, kube) generation will rely on template.
By default, the template generator uses the built-in in-memory templates. Developers who need to modify the templates must first write them to disk (template initialization) and edit them there; the next code generation will then load the templates from the specified path.
### Help for use
```text
NAME:
goctl template - template operation
USAGE:
goctl template command [command options] [arguments...]
COMMANDS:
init initialize the all templates(force update)
clean clean the all cache templates
update update template of the target category to the latest
revert revert the target template to the latest
OPTIONS:
--help, -h show help
```
### Template initialization
```text
NAME:
goctl template init - initialize the all templates(force update)
USAGE:
goctl template init [command options] [arguments...]
OPTIONS:
--home value the goctl home path of the template
```
### Clear template
```text
NAME:
goctl template clean - clean the all cache templates
USAGE:
goctl template clean [command options] [arguments...]
OPTIONS:
--home value the goctl home path of the template
```
### Update the specified category template
```text
NAME:
goctl template update - update template of the target category to the latest
USAGE:
goctl template update [command options] [arguments...]
OPTIONS:
--category value, -c value the category of template, enum [api,rpc,model,docker,kube]
--home value the goctl home path of the template
```
### Rollback template
```text
NAME:
goctl template revert - revert the target template to the latest
USAGE:
goctl template revert [command options] [arguments...]
OPTIONS:
--category value, -c value the category of template, enum [api,rpc,model,docker,kube]
--name value, -n value the target file name of template
--home value the goctl home path of the template
```
:::tip
`--home` Specify the template storage path
:::
### Template loading
You can specify the folder containing the templates with `--home` during code generation. The commands that support specifying the template directory are:
- `goctl api go` Details can be found in `goctl api go --help` for help
- `goctl docker` Details can be viewed with `goctl docker --help`
- `goctl kube` Details can be viewed with `goctl kube --help`
- `goctl rpc new` Details can be viewed with `goctl rpc new --help`
- `goctl rpc proto` Details can be viewed with `goctl rpc proto --help`
- `goctl model mysql ddl` Details can be viewed with `goctl model mysql ddl --help`
- `goctl model mysql datasource` Details can be viewed with `goctl model mysql datasource --help`
- `goctl model postgresql datasource` Details can be viewed with `goctl model postgresql datasource --help`
- `goctl model mongo` Details can be viewed with `goctl model mongo --help`
By default (when `--home` is not specified), templates are read from the `$HOME/.goctl` directory.
### Usage examples
* Initialize the template to the specified `$HOME/template` directory
```text
$ goctl template init --home $HOME/template
```
```text
Templates are generated in /Users/anqiansong/template, edit on your risk!
```
* Greet rpc generation using `$HOME/template` template
```text
$ goctl rpc new greet --home $HOME/template
```
```text
Done
```
## Template modification
### Scenario
Implement a uniformly formatted body response in the following format:
```json
{
"code": 0,
"msg": "OK",
"data": {} // ①
}
```
① Actual response data
:::tip
The code generated by `go-zero` does not apply this wrapping by default
:::
### Preparation
Next, in a project whose `module` is `greet`, we write a `Response` method in a `response` package, with a directory tree similar to the following.
```text
greet
├── response
│   └── response.go
└── xxx...
```
The code is as follows:
```go
package response
import (
"net/http"
"github.com/tal-tech/go-zero/rest/httpx"
)
type Body struct {
Code int `json:"code"`
Msg string `json:"msg"`
Data interface{} `json:"data,omitempty"`
}
func Response(w http.ResponseWriter, resp interface{}, err error) {
var body Body
if err != nil {
body.Code = -1
body.Msg = err.Error()
} else {
body.Msg = "OK"
body.Data = resp
}
httpx.OkJson(w, body)
}
```
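Note the `omitempty` tag on `Data`: when the payload is nil (the error path), the field is dropped from the JSON entirely, keeping error responses compact. A minimal, self-contained sketch of how the `Body` shape serializes:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Body mirrors the response wrapper defined above.
type Body struct {
	Code int         `json:"code"`
	Msg  string      `json:"msg"`
	Data interface{} `json:"data,omitempty"`
}

func marshalBody(b Body) string {
	buf, _ := json.Marshal(b)
	return string(buf)
}

func main() {
	// Success: data is included.
	fmt.Println(marshalBody(Body{Msg: "OK", Data: map[string]string{"message": "Hello go-zero!"}}))
	// Error: data is omitted thanks to omitempty.
	fmt.Println(marshalBody(Body{Code: -1, Msg: "something failed"}))
}
```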
### Modify the handler template
```shell
$ vim ~/.goctl/api/handler.tpl
```
Replace the template with the following
```go
package handler
import (
"net/http"
"greet/response"// ①
{% raw %}
{{.ImportPackages}}
{% endraw %}
)
{% raw %}
func {{.HandlerName}}(ctx *svc.ServiceContext) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
{{if .HasRequest}}var req types.{{.RequestType}}
if err := httpx.Parse(r, &req); err != nil {
httpx.Error(w, err)
return
}{{end}}
l := logic.New{{.LogicType}}(r.Context(), ctx)
{{if .HasResp}}resp, {{end}}err := l.{{.Call}}({{if .HasRequest}}req{{end}})
{{if .HasResp}}response.Response(w, resp, err){{else}}response.Response(w, nil, err){{end}}//②
}
}
{% endraw %}
```
① Replace with your real `response` package name, for reference only
② Customize the template content
:::tip
If you don't have a `~/.goctl/api/handler.tpl` file locally, you can initialize it with the template initialization command `goctl template init`
:::
### Comparison before and after modifying the template
* Before modification
```go
func GreetHandler(ctx *svc.ServiceContext) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
var req types.Request
if err := httpx.Parse(r, &req); err != nil {
httpx.Error(w, err)
return
}
l := logic.NewGreetLogic(r.Context(), ctx)
resp, err := l.Greet(req)
        // The following content will be replaced by the custom template
if err != nil {
httpx.Error(w, err)
} else {
httpx.OkJson(w, resp)
}
}
}
```
* After modification
```go
func GreetHandler(ctx *svc.ServiceContext) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
var req types.Request
if err := httpx.Parse(r, &req); err != nil {
httpx.Error(w, err)
return
}
l := logic.NewGreetLogic(r.Context(), ctx)
resp, err := l.Greet(req)
response.Response(w, resp, err)
}
}
```
### Comparison of response body before and after template modification
* Before modification
```json
{
"message": "Hello go-zero!"
}
```
* After modification
```json
{
"code": 0,
"msg": "OK",
"data": {
"message": "Hello go-zero!"
}
}
```
## Summary
This document describes template customization using the http response body as an example; other template customization scenarios include:
* adding kmq support to the model layer
* applying custom options to the model instances generated in the model layer
* customizing the http response format
---
sidebar_position: 1
---
# Introduction
`goctl` is pronounced `go control`, not `go C-T-L`. `goctl` means: don't be controlled by the code, control it instead. The `go` here does not refer to `golang`. When I designed `goctl`, I wanted to use it to free our hands 👈
### api generation
| Name | Function | Example |
| --- | --- | --- |
| `-o` | generate api file | `goctl api -o user.api` |
| `new` | Quickly create an api service | `goctl api new user` |
| `format` | api formatting, used by `vscode` <br /> `-dir` target directory <br /> `-iu` whether to automatically update goctl <br /> `-stdin` whether to read data from standard input | |
| `validate` | Verify that the api file is valid <br/> `-api` Specify api file source | `goctl api validate -api user.api` |
| `doc` | generate doc markdown <br/> `-dir` specify directory | `goctl api doc -dir user` |
| `go` | Generate golang api service<br/>`-dir` specify the generated code directory<br/>`-api` specify api file source<br/>`-force` whether to force overwriting existing files<br/>`-style` specify the filename naming style: gozero: lowercase, go_zero: underscore, GoZero: camel case | |
| `java` | Generate code to access api service - java language<br/> `-dir` specify code storage directory<br/> `-api` specify api file source | |
| `ts` | Generate code to access api service - ts language<br/>`-dir`Specify the code storage directory<br/>`-api`Specify api file source<br/>`webapi`<br/>`caller`<br/>`unwrap` | |
| `dart` | generate access to api service code-dart language<br/> `-dir` specify code storage directory<br/> `-api` specify api file source | |
| `kt` | Generate code to access api services - kotlin language<br/>`-dir`Specify the code storage directory<br/>`-api`Specify api file source<br/>`pkg`Specify package name | |
| `plugin` | `-plugin`Executable files<br/>`-dir`Code storage target folder<br/>`-api`api source code file<br/>`-style`File name naming formatting | |
### rpc generation
| Name | Function | Example |
| --- | --- | --- |
| `new` | Quickly generate an rpc service<br/>`-idea` identifies whether the command comes from the idea plugin, for idea plugin development; terminal execution can ignore it [optional]<br/>`-style` specify the filename naming style: gozero: lowercase, go_zero: underscore, GoZero: camel case | |
| `template` | create a proto template file<br/>`-idea` identifies whether the command comes from the idea plugin, for use in idea plugin development, ignore [optional parameter]<br/>`-out,o` specifies the code storage directory | |
| `proto` | Generate rpc services from proto<br/>`-src,s` specify the proto file source<br/>`-proto_path,I` specify the proto import lookup directory, a native protoc flag; see protoc -h for details<br/>`-dir,d` specify the code output directory<br/>`-idea` identifies whether the command comes from the idea plugin, for idea plugin development; terminal execution can ignore it [optional]<br/>`-style` specify the filename naming style: gozero: lowercase, go_zero: underscore, GoZero: camel case | |
| `model` | Model layer code operations<br/><br/>`mysql` generate model code from mysql<br/>&emsp;&emsp;`ddl` generate model code from ddl files<br/>&emsp;&emsp;&emsp;&emsp;`-src,s` specify the sql file source containing ddl, wildcard matching supported<br/>&emsp;&emsp;&emsp;&emsp;`-dir,d` specify the code output directory<br/>&emsp;&emsp;&emsp;&emsp;`-style` specify the filename naming style: gozero: lowercase, go_zero: underscore, GoZero: camel case<br/>&emsp;&emsp;&emsp;&emsp;`-cache,c` whether the generated code includes redis cache logic, bool value<br/>&emsp;&emsp;&emsp;&emsp;`-idea` identifies whether the command comes from the idea plugin, for idea plugin development; terminal execution can ignore it [optional]<br/>&emsp;&emsp;`datasource` generate model code from a database connection<br/>&emsp;&emsp;&emsp;&emsp;`-url` specify the database connection<br/>&emsp;&emsp;&emsp;&emsp;`-table,t` specify the table name, wildcards supported<br/>&emsp;&emsp;&emsp;&emsp;`-dir,d` specify the code output directory<br/>&emsp;&emsp;&emsp;&emsp;`-style` specify the filename naming style: gozero: lowercase, go_zero: underscore, GoZero: camel case<br/>&emsp;&emsp;&emsp;&emsp;`-cache,c` whether the generated code includes redis cache logic, bool value<br/>&emsp;&emsp;&emsp;&emsp;`-idea` identifies whether the command comes from the idea plugin, for idea plugin development; terminal execution can ignore it [optional]<br/><br/>`mongo` generate model code from mongo<br/>&emsp;&emsp;`-type,t` specify the Go Type name<br/>&emsp;&emsp;`-cache,c` whether the generated code includes redis cache logic, bool value, default no<br/>&emsp;&emsp;`-dir,d` specify the code output directory<br/>&emsp;&emsp;`-style` specify the filename naming style: gozero: lowercase, go_zero: underscore, GoZero: camel case | |
### model generation
| Name | Function | Example |
| --- | --- | --- |
| `mysql` | Generate model code from mysql<br/>&emsp;&emsp;`ddl` generate model code from ddl files<br/>&emsp;&emsp;&emsp;&emsp;`-src,s` specify the sql file source containing ddl, wildcard matching supported<br/>&emsp;&emsp;&emsp;&emsp;`-dir,d` specify the code output directory<br/>&emsp;&emsp;&emsp;&emsp;`-style` specify the filename naming style: gozero: lowercase, go_zero: underscore, GoZero: camel case<br/>&emsp;&emsp;&emsp;&emsp;`-cache,c` whether the generated code includes redis cache logic, bool value<br/>&emsp;&emsp;&emsp;&emsp;`-idea` identifies whether the command comes from the idea plugin, for idea plugin development; terminal execution can ignore it [optional]<br/>&emsp;&emsp;`datasource` generate model code from a database connection<br/>&emsp;&emsp;&emsp;&emsp;`-url` specify the database connection<br/>&emsp;&emsp;&emsp;&emsp;`-table,t` specify the table name, wildcards supported<br/>&emsp;&emsp;&emsp;&emsp;`-dir,d` specify the code output directory<br/>&emsp;&emsp;&emsp;&emsp;`-style` specify the filename naming style: gozero: lowercase, go_zero: underscore, GoZero: camel case<br/>&emsp;&emsp;&emsp;&emsp;`-cache,c` whether the generated code includes redis cache logic, bool value<br/>&emsp;&emsp;&emsp;&emsp;`-idea` identifies whether the command comes from the idea plugin, for idea plugin development; terminal execution can ignore it [optional] | |
| `mongo` | Generate model code from mongo<br/>&emsp;&emsp;`-type,t` specify the Go Type name<br/>&emsp;&emsp;`-cache,c` whether the generated code includes redis cache logic, bool value, default no<br/>&emsp;&emsp;`-dir,d` specify the code output directory<br/>&emsp;&emsp;`-style` specify the filename naming style: gozero: lowercase, go_zero: underscore, GoZero: camel case | |
### template operation
| Name | Function | Example |
| --- | --- | --- |
| `init` | Save `api`/`rpc`/`model` template | `goctl template init` |
| `clean` | clear cache template | `goctl template clean` |
| `update` | update template<br/>`-category,c` specify the group name to be updated `api`/`rpc`/`model` | `goctl template update -c api` |
| `revert` | restore the specified template file<br/>`-category,c` specify the name of the group to be updated `api`/`rpc`/`model`<br/>`-name,n` specify the name of the template file | |
### config configuration file generation
| Name | Function | Example |
| --- | --- | --- |
| `-path,p` | specify the configuration file directory | `goctl config -p user` |
### docker generates Dockerfile
| Name | Function | Example |
| --- | --- | --- |
| `-go` | specify main function file | |
| `-port` | Specify the exposed port | |
### upgrade Update goctl to the latest version
### kube Generate k8s deployment files
### deploy k8s deployment
| Name | Function | Example |
| --- | --- | --- |
| `-name` | service name | |
| `-namespace` | specify k8s namespace | |
| `-image` | specify the image name | |
| `-secret` | Specifies the k8s secret for getting the image | |
| `-requestCpu` | specify the default cpu allocation | |
| `-requestMem` | specify the default memory allocation | |
| `-limitCpu` | specify the maximum cpu allocation | |
| `-limitMem` | specify the maximum memory allocation | |
| `-o` | deployment.yaml output directory | |
| `-replicas` | specify the number of replicas | |
| `-revisions` | specify the number of records to keep for the release | |
| `-port` | specify the service port | |
| `-nodePort` | specifies the port to which the service is exposed | |
| `-minReplicas` | specify the minimum number of replicas | |
| `-maxReplicas` | specify the maximum number of replicas | |
---
sidebar_position: 7
---
# Load Balancer
### Background
When selecting a load balancing algorithm, we want to meet the following requirements.
- Have partitioning and server room scheduling affinity
- Choose the node with the lowest load possible each time
- Select the fastest responsive node possible each time
- No need for manual intervention on failed nodes
- When a node fails, the load balancing algorithm can automatically isolate the node
- When a failed node recovers, traffic distribution to that node can be automatically resumed
### The core idea of the algorithm
#### p2c
`p2c` (Pick Of 2 Choices): randomly pick two nodes out of the available nodes.
In `go-zero`, the random pick is attempted up to 3 times; if both selected nodes pass the health check, the selection stops and those two nodes are used.
#### EWMA
`EWMA` (Exponentially Weighted Moving Average): the weighting factor of each value decays exponentially with time; the closer a value is to the current moment, the larger its weight, so the average reflects the most recent period.
- Formula
![ewma](/img/ewma.png)
- Variable Explanation
- Vt: the EWMA value after the t-th request
- Vt-1: the EWMA value after the (t-1)-th request
- β: a constant
#### EWMA Advantages of the algorithm
- Compared to an ordinary arithmetic mean, EWMA does not need to store all past values, which significantly reduces computation and storage.
- A plain arithmetic mean is not sensitive to recent network latency, while EWMA can adjust β with request frequency to either quickly catch network spikes or better reflect the overall average:
- When requests are frequent, the node's network load is rising and we want to closely monitor its processing time (which reflects the load), so β is adjusted down accordingly; with a smaller β, the EWMA value tracks the latest observations, letting us detect network spikes quickly.
- When requests are sparse, β is adjusted relatively larger, so the computed EWMA value stays closer to the long-run average.
#### β calculation
`go-zero` uses the decay function from Newton's law of cooling to compute the `β` value in the `EWMA` algorithm:
![ewma](/img/β.png)
where `Δt` is the interval between two requests, and `e`, `k` are constants
### Implementing a custom load balancer in gRPC
First we need to implement the `PickerBuilder` interface in google.golang.org/grpc/balancer/base/base.go; gRPC calls its `Build` method whenever the service nodes are updated
```go title="grpc-go/balancer/base/base.go"
type PickerBuilder interface {
// Build returns a picker that will be used by gRPC to pick a SubConn.
Build(info PickerBuildInfo) balancer.Picker
}
```
We also need to implement the `Picker` interface in google.golang.org/grpc/balancer/balancer.go, which performs the actual load balancing: picking a node for each request
```go title="grpc-go/balancer/balancer.go"
type Picker interface {
Pick(info PickInfo) (PickResult, error)
}
```
Finally, register our load balancer implementation in gRPC's balancer registry
### The main logic of go-zero's load balancing implementation
- At each node update, `gRPC` will call the `Build` method, where all the node information is stored in `Build`.
- `gRPC` calls the `Pick` method to fetch nodes when it fetches nodes to process requests. `go-zero` implements the `p2c` algorithm in the `Pick` method to pick the node and calculate the load from the `EWMA` value of the node and return the node with low load for gRPC to use.
- At the end of the request `gRPC` calls the `PickResult.Done` method, in which `go-zero` stores the information about the time spent on this request and calculates the `EWMA` value and saves it for the next request to calculate the load and so on.
### Load Balancing Code Analysis
#### Save all node information of the service
We need to keep information about the time taken by the node to process this request, `EWMA`, etc. `go-zero` has designed the following structure for each node.
```go title="go-zero/zrpc/internal/balancer/p2c/p2c.go"
type subConn struct {
addr resolver.Address
conn balancer.SubConn
lag uint64 // Used to save ewma values
inflight int64 // Used to keep the total number of requests being processed by the current node
success uint64 // Used to identify the health status of this connection over time
requests int64 // Used to store the total number of requests
last int64 // Used to save the last request time, used to calculate the ewma value
pick int64 // Save the last selected point in time
}
```
#### `p2cPicker` implements the `balancer.Picker` interface, and `conns` holds information about all nodes of the service
```go title="go-zero/zrpc/internal/balancer/p2c/p2c.go"
type p2cPicker struct {
conns []*subConn // Save information about all nodes
r *rand.Rand
stamp *syncx.AtomicDuration
lock sync.Mutex
}
```
#### `gRPC` calls the `Build` method when nodes are updated, passing in all node information; we save each node's information in a `subConn` structure and gather them in a `p2cPicker` structure
```go title="go-zero/zrpc/internal/balancer/p2c/p2c.go:42"
func (b *p2cPickerBuilder) Build(info base.PickerBuildInfo) balancer.Picker {
......
var conns []*subConn
for conn, connInfo := range readySCs {
conns = append(conns, &subConn{
addr: connInfo.Address,
conn: conn,
success: initSuccess,
})
}
return &p2cPicker{
conns: conns,
r: rand.New(rand.NewSource(time.Now().UnixNano())),
stamp: syncx.NewAtomicDuration(),
}
}
```
#### Random node selection; there are three cases:
The main implementation code is as follows.
```go title="go-zero/zrpc/internal/balancer/p2c/p2c.go:80"
switch len(p.conns) {
case 0: // No node, return error
return emptyPickResult, balancer.ErrNoSubConnAvailable
case 1: // There is a node, return this node directly
chosen = p.choose(p.conns[0], nil)
case 2: // There are two nodes, calculate the load and return the node with the lower load
chosen = p.choose(p.conns[0], p.conns[1])
default: // There are multiple nodes, p2c picks two nodes, compares the load of these two nodes, and returns the node with the lower load
var node1, node2 *subConn
// 3 times random selection of two nodes
for i := 0; i < pickTimes; i++ {
a := p.r.Intn(len(p.conns))
b := p.r.Intn(len(p.conns) - 1)
if b >= a {
b++
}
node1 = p.conns[a]
node2 = p.conns[b]
// If the selected node meets the health requirements this time, break the selection
if node1.healthy() && node2.healthy() {
break
}
}
// Compare the load of the two nodes and choose the one with the lower load
chosen = p.choose(node1, node2)
}
```
- There is only one service node, which is returned directly for gRPC use
- There are two service nodes, calculate the load by EWMA value, and return the node with low load for gRPC
- With multiple service nodes, two nodes are selected by the p2c algorithm, the load is compared, and the node with the lower load is returned for gRPC
#### `load` calculates the load of the node
The `choose` method above will call the `load` method to calculate the node load.
The formula for calculating the load is: `load = ewma * inflight` (in the implementation, the square root of the EWMA value is used and 1 is added to each factor to avoid zeros).
In brief: `ewma` approximates the average request latency and `inflight` is the number of requests the node is currently processing; their product roughly reflects the node's current network load.
```go
func (c *subConn) load() int64 {
// Calculate the load of the node by EWMA; add 1 to avoid the case of 0
lag := int64(math.Sqrt(float64(atomic.LoadUint64(&c.lag) + 1)))
load := lag * (atomic.LoadInt64(&c.inflight) + 1)
if load == 0 {
return penalty
}
return load
}
```
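A quick numerical check of the load function above, with hypothetical EWMA and in-flight values, shows that a node with lower latency can still carry the higher load when it has more requests in flight:

```go
package main

import (
	"fmt"
	"math"
)

// load mirrors the computation above, minus the atomics.
func load(ewma uint64, inflight int64) int64 {
	lag := int64(math.Sqrt(float64(ewma + 1)))
	return lag * (inflight + 1)
}

func main() {
	// Node A: EWMA 10000, 1 request in flight → load 200.
	fmt.Println(load(10000, 1))
	// Node B: EWMA 2500, 9 requests in flight → load 500.
	// B is the more loaded node despite its lower average latency.
	fmt.Println(load(2500, 9))
}
```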
#### End of request, update information such as `EWMA` of the node
```go
func (p *p2cPicker) buildDoneFunc(c *subConn) func(info balancer.DoneInfo) {
start := int64(timex.Now())
return func(info balancer.DoneInfo) {
// Number of requests being processed minus 1
atomic.AddInt64(&c.inflight, -1)
now := timex.Now()
// Save the time point at the end of this request and retrieve the time point at the last request
last := atomic.SwapInt64(&c.last, int64(now))
td := int64(now) - last
if td < 0 {
td = 0
}
// Calculation of β in EWMA algorithm using the decay function model in Newton's cooling law
w := math.Exp(float64(-td) / float64(decayTime))
// Save the elapsed time of this request
lag := int64(now) - start
if lag < 0 {
lag = 0
}
olag := atomic.LoadUint64(&c.lag)
if olag == 0 {
w = 0
}
// Calculating EWMA values
atomic.StoreUint64(&c.lag, uint64(float64(olag)*w+float64(lag)*(1-w)))
success := initSuccess
if info.Err != nil && !codes.Acceptable(info.Err) {
success = 0
}
osucc := atomic.LoadUint64(&c.success)
atomic.StoreUint64(&c.success, uint64(float64(osucc)*w+float64(success)*(1-w)))
stamp := p.stamp.Load()
if now-stamp >= logInterval {
if p.stamp.CompareAndSwap(stamp, now) {
p.logStats()
}
}
}
}
```
- Subtract 1 from the node's count of in-flight requests
- Save the time point at which this request finished; the interval to the node's previous request is used to compute the β value of the EWMA
- Compute this request's elapsed time and fold it into the EWMA value stored in the node's lag attribute
- Compute the node's health status and store it in the node's success attribute
---
sidebar_position: 5
---
# Circuit Breaker
### Circuit Breaker Principle
The circuit breaker mechanism borrows from the fuses that protect our household circuits: when a circuit is overloaded, the fuse blows, so the appliances on the circuit are not damaged. In service governance, circuit breaking means that when a caller finds the callee's error rate exceeding a threshold, it stops issuing new requests and instead returns errors directly on the caller side.
In this model, the service caller maintains a state machine for each callee (invocation path) with three states:
* Closed: in this state we keep a counter of failed calls and total requests. If the failure rate reaches a preset threshold within a time window, the breaker switches to the open state and starts a timeout period; when the timeout expires it switches to the half-open state, giving the system a chance to fix whatever caused the failures and return to normal operation. In the closed state, the error counters are time-based and reset at fixed intervals, so that occasional errors do not trip the breaker.
* Open: in this state requests immediately return an error. A timeout timer is usually started; when it fires, the state switches to half-open. A timer can also be set to periodically probe whether the service has recovered.
* Half-Open: in this state the caller is allowed to send a limited number of requests to the callee. If these calls succeed, the callee is considered recovered, the breaker switches to the closed state, and the counters are reset. If some of these calls still fail, the callee is considered not yet recovered, and the breaker switches back to the open state and resets the counters. The half-open state effectively prevents a recovering service from being overwhelmed again by a sudden burst of requests.
![breaker](/img/breaker.png)
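The three-state machine can be sketched as follows. This is a deliberately simplified illustration (no time windows, timers, or concurrency safety), not go-zero's implementation — go-zero uses the adaptive algorithm described in the next section:

```go
package main

import "fmt"

type state int

const (
	closed state = iota
	open
	halfOpen
)

// breaker is a minimal three-state circuit breaker sketch.
type breaker struct {
	st        state
	failures  int
	threshold int // consecutive failures before tripping
}

// Allow reports whether a request may proceed.
func (b *breaker) Allow() bool {
	return b.st != open
}

// Record feeds back the outcome of one call.
func (b *breaker) Record(ok bool) {
	switch b.st {
	case closed:
		if ok {
			b.failures = 0
			return
		}
		b.failures++
		if b.failures >= b.threshold {
			b.st = open // trip: reject subsequent requests
		}
	case halfOpen:
		if ok {
			b.st, b.failures = closed, 0 // probe succeeded: recover
		} else {
			b.st = open // probe failed: trip again
		}
	}
}

// Probe moves open → half-open, standing in for the timeout timer.
func (b *breaker) Probe() {
	if b.st == open {
		b.st = halfOpen
	}
}

func main() {
	b := &breaker{threshold: 2}
	b.Record(false)
	b.Record(false) // second failure trips the breaker
	fmt.Println(b.Allow())
	b.Probe()       // timeout elapsed: allow a probe
	b.Record(true)  // probe succeeded: back to closed
	fmt.Println(b.Allow())
}
```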
### Adaptive Circuit Breaker
`go-zero` adopts the adaptive circuit breaker from [`Google SRE`](https://landing.google.com/sre/sre-book/chapters/handling-overload/). The principle of the algorithm is as follows:
When a service is overloaded, a request should arrive and be quickly rejected, returning a "service overload" type error, which should consume far fewer resources than actually processing the request. However, this logic does not really apply to all requests. For example, rejecting a request to perform a simple memory query may consume about the same amount of memory as actually performing the request (since the main consumption here is in the application layer protocol parsing, where the result generation part is simple). Even if, in some cases, denying a request saves a lot of resources, sending these denial replies still consumes a certain amount of resources. If the number of rejection replies is also large, these resource consumptions may also be significant. In this case, it is possible that the service will go into overload as well while it is busy constantly sending rejection replies.
Client-side throttling solves this problem. When a client detects that a large portion of recent request errors are due to "service overload" errors, the client starts to limit the speed of requests on its own, limiting the number of requests it generates. Requests that exceed this request count limit fail directly in the local reply, and are not actually sent to the network layer.
We use a technique called adaptive throttling to implement client throttling. Specifically, each client records the following information for the past two minutes.
* requests The total number of all requests made by the application layer code, referring to the application code running on top of the adaptive throttling system.
* accept Number of requests accepted by back-end tasks.
In the normal case these two values are equal. As the backend starts rejecting requests, `accepts` becomes smaller than `requests`. The client may keep sending requests until `requests = K * accepts`; once this limit is exceeded, the client starts throttling itself, and new requests are rejected directly locally (inside the client) with a probability computed from the following metric:
![breaker](/img/breaker_algo.png)
As the client begins rejecting requests on its own, `requests` keeps climbing past `K * accepts`. While it may seem counterintuitive that locally rejected requests never reach the backend, this is precisely the point of the algorithm: the faster the client issues requests (relative to the speed at which the backend accepts them), the higher we want the probability of dropping a request locally.
We found that adaptive throttling works well in practice and maintains a very stable overall request rate. The backend can essentially maintain a 50% processing rate even under severe overload. A major advantage of this approach is that the client relies entirely on local information to make its decision, and the implementation is relatively simple: it adds no extra dependencies and does not affect latency.
For systems where the cost of processing a request and the cost of rejecting it are not very different, allowing 50% of the resources to be spent on rejection replies may be unreasonable. In that case the fix is simple: adjust the multiplier K applied to accepts in the client algorithm (e.g., 2).
* Decreasing this multiplier will make the adaptive throttling algorithm more aggressive.
* Increasing this multiplier will make the algorithm less aggressive.
For example, if the client-side limit is adjusted from requests = 2 * accepts to requests = 1.1 * accepts, then only 1 out of every 10 requests to the backend will be rejected. The general recommendation is K = 2: by letting the backend receive more requests than expected, some backend resources are wasted, but the propagation of backend state to the clients speeds up. For example, after the backend stops rejecting requests from a client, the time it takes all clients to detect the change is reduced.
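To see the effect of K concretely, here is a standalone sketch (not go-zero's code) that plugs numbers into the formula max(0, (requests - K*accepts) / (requests + 1)):

```go
package main

import (
	"fmt"
	"math"
)

// dropProbability is the client-side throttling formula:
// max(0, (requests - K*accepts) / (requests + 1)).
func dropProbability(requests, accepts, k float64) float64 {
	return math.Max(0, (requests-k*accepts)/(requests+1))
}

func main() {
	// Suppose the backend accepted 100 of the client's last 300 requests.
	fmt.Printf("K=2.0 drop probability: %.2f\n", dropProbability(300, 100, 2.0))
	fmt.Printf("K=1.1 drop probability: %.2f\n", dropProbability(300, 100, 1.1))
	// Lowering K from 2.0 to 1.1 makes local rejection more aggressive.
}
```

With K = 2 roughly a third of new requests are dropped locally in this scenario, while K = 1.1 drops nearly two thirds, which matches the "more aggressive for smaller K" rule above.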
```go title="go-zero/core/breaker/googlebreaker.go"
type googleBreaker struct {
	k     float64
	stat  *collection.RollingWindow
	proba *mathx.Proba
}
```
Algorithm Implementation
```go title="go-zero/core/breaker/googlebreaker.go"
func (b *googleBreaker) accept() error {
	accepts, total := b.history()
	weightedAccepts := b.k * float64(accepts)
	// protection is a small constant that keeps the breaker from tripping
	// while the overall request volume is still very low
	dropRatio := math.Max(0, (float64(total-protection)-weightedAccepts)/float64(total+1))
	if dropRatio <= 0 {
		return nil
	}

	if b.proba.TrueOnProba(dropRatio) {
		return ErrServiceUnavailable
	}

	return nil
}
```
The `doReq` method is called each time a request is made. It first calls `accept` to check whether the circuit breaker has tripped. `acceptable` determines which errors count toward the failure statistics and is defined as follows:
```go title="go-zero/zrpc/internal/codes/accept.go"
func Acceptable(err error) bool {
	switch status.Code(err) {
	case codes.DeadlineExceeded, codes.Internal, codes.Unavailable, codes.DataLoss:
		return false
	default:
		return true
	}
}
```
If the request succeeds, `markSuccess` increments both the total request count and the accepted count; if it fails, only the total request count is incremented:
```go title="go-zero/core/breaker/googlebreaker.go"
func (b *googleBreaker) doReq(req func() error, fallback func(err error) error, acceptable Acceptable) error {
	if err := b.accept(); err != nil {
		if fallback != nil {
			return fallback(err)
		} else {
			return err
		}
	}

	defer func() {
		if e := recover(); e != nil {
			b.markFailure()
			panic(e)
		}
	}()

	err := req()
	if acceptable(err) {
		b.markSuccess()
	} else {
		b.markFailure()
	}

	return err
}
```
### Usage examples
Circuit breaker protection is enabled by default in the go-zero framework; no additional configuration is required.
:::tip
If you want to use circuit breaking in a non-go-zero project, you can also port it over on its own.
:::
The following error is returned when the circuit breaker trips:
```go title="go-zero/core/breaker/breaker.go"
var ErrServiceUnavailable = errors.New("circuit breaker is open")
```
---
sidebar_position: 4
---
# Cache
### Foreword
Think about it: when traffic surges, which part of the server side is most likely to become the first bottleneck? I believe most people will hit the database first: as volume grows, slow queries appear, and the database may even lock up. At that point, no matter how strong the governance capabilities of the services above it are, they will not help.
So we often say that to judge whether a system architecture is well designed, looking at how the caching is designed often tells you a lot. We once ran into exactly this problem. Before I joined, our service had no cache. Although traffic was not high, every day at peak hours everyone got particularly nervous: the service went down several times a week, the database simply got overwhelmed, and all we could do was restart it. I was still a consultant at the time; looking at the system design, the only emergency fix was to have everyone add caching first. But due to a lack of caching knowledge and the chaos of the old system, every business developer added caching in their own way. The result was that caches were in use, but the data was scattered everywhere and there was no way to ensure consistency. That was indeed a rather painful experience that should resonate with many of you.
I then pushed to rebuild the whole system, and the cache design played a very prominent role in the new architecture, hence today's sharing.
I've divided it into the following sections to discuss with you.
- Caching System FAQ
- Caching and automatic management of single-line queries
- Multi-line query caching mechanism
- Distributed caching system design
- Caching code automation practices
The issues and knowledge involved in caching systems are relatively numerous, and I will discuss them in the following areas.
- Stability
- Correctness
- Observability
- Specification landing and tool building
### Cache system stability
![system stability](/img/system-stability.png)
In terms of cache stability, basically all cache-related articles and shares on the web will talk about three key points.
- Cache Penetration
- Cache breakdown
- Cache Avalanche
Why talk about cache stability first? Recall when we usually introduce a cache: typically when the DB is under pressure or even frequently overwhelmed, so we introduce the caching system precisely to solve stability problems.
### Cache Penetration
![Cache Penetration](/img/cache-penetration.png)
Cache penetration happens when requests ask for data that does not exist. As the figure shows, request 1 for such data goes to the cache first, but since the data does not exist there is no cache entry, so it falls through to the DB. Requests 2 and 3 for the same data likewise fall through the cache to the DB. When a large number of requests ask for non-existent data, the DB comes under heavy pressure; this can even be exploited maliciously (someone discovers a non-existent key and launches a flood of requests against it).
`go-zero`'s solution is to also cache a placeholder for a short period (say, one minute) for requests whose data does not exist, so that the number of DB queries for the same non-existent key is decoupled from the actual number of requests. The business side can of course remove the placeholder when the data is later inserted, ensuring the new data is immediately queryable.
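As an illustration of the placeholder idea, here is a self-contained sketch with made-up names (not go-zero's actual cache implementation):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var errNotFound = errors.New("not found")

const placeholder = "*" // sentinel cached for keys that do not exist in the DB

type entry struct {
	val      string
	expireAt time.Time
}

// Cache is a minimal sketch of the placeholder technique.
type Cache struct {
	mu    sync.Mutex
	data  map[string]entry
	db    map[string]string // stands in for the real database
	dbHit int               // how many lookups actually reached the "DB"
}

func (c *Cache) Get(key string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if e, ok := c.data[key]; ok && time.Now().Before(e.expireAt) {
		if e.val == placeholder {
			return "", errNotFound // short-circuit: known-missing key
		}
		return e.val, nil
	}
	c.dbHit++
	v, ok := c.db[key]
	if !ok {
		// Cache a short-lived placeholder so repeated misses don't hit the DB.
		c.data[key] = entry{val: placeholder, expireAt: time.Now().Add(time.Minute)}
		return "", errNotFound
	}
	c.data[key] = entry{val: v, expireAt: time.Now().Add(time.Hour)}
	return v, nil
}

func main() {
	c := &Cache{data: map[string]entry{}, db: map[string]string{"1": "book"}}
	for i := 0; i < 100; i++ {
		c.Get("missing-id") // 100 requests for a non-existent key
	}
	fmt.Println(c.dbHit) // prints 1: only the first request reached the DB
}
```

The 100 requests for the missing key result in a single DB query; the other 99 are answered by the cached placeholder.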
### Cache Breakdown
Cache breakdown is caused by the expiration of hot data. Because the data is hot, once it expires there may be a large number of concurrent requests for it; if none of them find it in the cache and they all fall to the DB at the same time, the DB comes under enormous instantaneous pressure and may even lock up.
`go-zero`'s solution is that for the same piece of data we rely on `core/syncx/SharedCalls` to ensure that only one request falls to the DB at a time; other requests for the same data wait for the first one to return and share its result or error. Depending on the concurrency scenario, we can choose an in-process lock (moderate concurrency) or a distributed lock (very high concurrency). Since introducing distributed locks adds complexity and cost, we follow Occam's razor: do not add entities unless necessary.
![cache breakdown](/img/cache-breakdown.png)
Let's take a look at the cache breakdown protection process together in the figure above, where we use different colors to indicate different requests.
- Green request arrives first, finds no data in the cache, and goes to DB to query
- The pink request arrives, requests the same data, finds that the request is already being processed, and waits for the green request to return, singleflight mode
- The green request returns, the pink request returns with the result shared by the green request
- Subsequent requests, such as blue requests, can get data directly from the cache
### Cache Avalanche
A cache avalanche happens when a large number of cache entries loaded at the same time share the same expiration time: when it arrives, they all expire within a short window, and many requests fall to the DB simultaneously, causing DB pressure to spike or even locking it up.
For example, in the pandemic online-teaching scenario, high school, middle school, and elementary school classes start in a few synchronized time slots, so a large amount of data is loaded at the same time with the same expiration. When that expiration arrives, waves of DB requests hit at once, and the pressure wave carries over into the next cycle and can even compound.
The solution to `go-zero` is:
- Use distributed caching to prevent cache avalanches due to single point of failure
- Add a 5% standard deviation to the expiration time; 5% is the empirical p-value from hypothesis testing (interested readers can look it up for themselves)
![cache avalanche](/img/cache-avalanche.png)
Let's do an experiment: with 10,000 keys, an expiration time of 1 hour, and a standard deviation of 5%, the expiration times are distributed fairly evenly between roughly 3,400 and 3,800 seconds. If our default expiration is 7 days, they spread evenly across a 16-hour window centered on the 7-day mark. This effectively prevents the cache avalanche problem.
### Cache Correctness
The original purpose of introducing a cache is to reduce DB pressure and improve system stability, so we focus on the stability of the caching system first. Once stability is solved, we generally face data correctness problems, such as the frequent question "the data has clearly been updated, so why does it still show the old value?" This is the classic "cache data consistency" problem; below we carefully analyze why it happens and how to deal with it.
### Common practices for data updates
First of all, our discussion of data consistency assumes that the DB update and the cache deletion are not treated as a single atomic operation, because in a high-concurrency scenario we cannot introduce a distributed lock to bind the two together: doing so would significantly hurt concurrency and increase system complexity. So we only pursue eventual consistency. This article only addresses high-concurrency scenarios that do not require strong consistency; readers working on financial payments and the like should judge for themselves.
There are two main categories of common data update methods, and the rest are basically variants of these two categories.
#### Delete the cache first, then update the database
![delete update](/img/delete-update.png)
With this approach, when a data update arrives, we first delete the cache and then update the DB, as shown in the left part of the figure. Let's walk through the whole operation flow.
- Request A needs to update data: it deletes the corresponding cache first, without yet updating the DB
- Request B reads the data
- Request B sees no cache, reads the DB, and writes the old data into the cache (dirty data)
- Request A updates the DB
You can see that request B writes dirty data to the cache. If this data is read often and written rarely, the dirty data may persist for a long time (until a later update or cache expiry), which is unacceptable for the business.
#### Update the database first, then delete the cache
![update delete](/img/update-delete.png)
In the right-hand part of the figure above, request B reads the old data between A's DB update and A's cache deletion. Since A's operation is not yet complete and the window for reading old data is very short, this still satisfies eventual consistency.
The figure also shows why we delete the cache instead of updating it, for the following reason:
![ab op](/img/ab-op.png)
For a delete operation, it does not matter whether A or B deletes first, because subsequent read requests will load the latest data from the DB. But if we update the cache instead, the result is sensitive to whether A or B updates the cache first: if A's update lands later, the cache holds dirty data again. That is why go-zero only uses cache deletion.
Let's take a look at the complete request processing flow together
![complete process](/img/complete-process.png)
Note: Different colors represent different requests.
- Request 1 updates the DB
- Request 2 queries the same data and returns the old data; this brief window of old data is acceptable under eventual consistency
- Request 1 deletes the cache
- Request 3 arrives, finds nothing in the cache, queries the database, writes the result back to the cache, and returns it
- Subsequent requests will read the cache directly
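The read and update paths above can be sketched as a minimal in-memory model (illustrative names, not go-zero's generated code):

```go
package main

import (
	"fmt"
	"sync"
)

// Store sketches the "update DB first, then delete cache" pattern with a
// read-through cache.
type Store struct {
	mu    sync.Mutex
	db    map[string]string
	cache map[string]string
}

// Update writes the DB first, then deletes (never updates) the cache entry.
func (s *Store) Update(key, val string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.db[key] = val      // 1. update DB
	delete(s.cache, key) // 2. delete cache
}

// Read serves from the cache, falling back to the DB on a miss and writing
// the result back so later reads hit the cache.
func (s *Store) Read(key string) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	if v, ok := s.cache[key]; ok {
		return v
	}
	v := s.db[key]
	s.cache[key] = v
	return v
}

func main() {
	s := &Store{
		db:    map[string]string{"user:1": "old"},
		cache: map[string]string{"user:1": "old"},
	}
	s.Update("user:1", "new")
	fmt.Println(s.Read("user:1")) // prints "new": the read misses and reloads from DB
}
```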
What should we do for the scenario below?
![caching scenarios](/img/caching-scenarios.png)
Let's analyze together several possible solutions to this problem.
- Use distributed locks to make each update atomic. This is the least desirable option; it amounts to giving up high concurrency in pursuit of strong consistency. Remember the earlier caveat: this series only addresses high-concurrency scenarios that do not require strong consistency, so we discard this option first.
- Delay A's cache deletion, for example executing it 1 second later. The downside is that, to handle a very low-probability case, every read within that 1 second can only get old data. This approach is also not ideal, and we would rather not use it.
- Instead of deleting the cache, have A set a special placeholder, and have B set the cache with the redis `setnx` command; subsequent requests that encounter the placeholder re-request the cache. This effectively adds a new state to cache deletion, as shown in the figure below:
![cache placeholder](/img/cache-placeholder.png)
Doesn't this just go around in circles? Whenever request A encounters a placeholder, it must force-set the cache or check whether the content is a placeholder, so this doesn't solve the problem either.
So let's see how go-zero reacts to this situation. Surprised that we choose not to handle it? Let's go back and analyze how this case arises:
- The data being read is not cached (it was never loaded into the cache, or the cache has expired), triggering a DB read
- At this moment an update operation on that data arrives
- The following order must then occur: request B reads DB -> request A writes DB -> request A deletes cache -> request B sets cache
We know that a DB write needs to lock the row, which is a slow operation, while a read does not, so the probability of this interleaving is quite low. And since we set an expiration time, the chance of actually hitting this case in real scenarios is extremely low. Truly solving it would require guaranteeing consistency via 2PC or Paxos, which I don't think is the method we want: too complex!
The hardest part of doing architecture, I think, is knowing the trade-offs; finding the balance with the best payoff is a real test of overall ability.
### Cache Observability
The previous two sections solved cache stability and data consistency. At this point our system fully enjoys the value the cache brings and the zero-to-one problem is solved. Next we should consider how to further reduce the cost of use: which caches deliver real business value and which can be removed to cut server costs, which caches need more server resources, what each cache's QPS is, how many requests hit, whether further tuning is needed, and so on.
![cache log](/img/cache-log.png)
The figure above shows the cache monitoring log of one service. We can see this cache serves 5,057 requests per minute, 99.7% of which hit the cache; only 13 fell to the DB, and the DB queries all returned successfully. This monitoring shows the cache reduces DB pressure by nearly three orders of magnitude (a 90% hit rate is one order of magnitude, 99% is two, and 99.7% is almost three), so the payoff from this cache is quite good.
But if, conversely, the cache hit rate were only 0.3%, there would be little gain, and we should remove the cache: first to reduce system complexity (do not add entities unless necessary), and second to cut server costs.
If the service's QPS is especially high (enough to put heavy pressure on the DB) and the cache hit rate is only 50%, meaning we have halved the pressure, we should consider increasing the expiration time, according to the business, to raise the hit rate.
If the service's QPS is especially high (enough to put heavy pressure on the cache itself) and the cache hit rate is also high, then we can consider increasing the QPS the cache can carry or adding an in-process cache to relieve pressure on the cache.
All of this is based on cache monitoring; only when the cache is observable can we make further targeted tuning and simplification. As I always emphasize: "without metrics, there is no optimization".
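The orders-of-magnitude arithmetic above can be checked directly:

```go
package main

import (
	"fmt"
	"math"
)

// dbPressureReduction expresses the cache benefit in orders of magnitude:
// with hit rate h, only (1-h) of the requests reach the DB, a reduction
// of log10(1/(1-h)) orders of magnitude.
func dbPressureReduction(hitRate float64) float64 {
	return math.Log10(1 / (1 - hitRate))
}

func main() {
	for _, h := range []float64{0.90, 0.99, 0.997} {
		fmt.Printf("hit rate %.1f%% -> %.1f orders of magnitude\n", h*100, dbPressureReduction(h))
	}
	// prints 1.0, 2.0 and 2.5 orders of magnitude respectively
}
```

A 99.7% hit rate means only 3 in 1,000 requests reach the DB, a reduction by a factor of roughly 333.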
### How do I get the cache to be used in a regulated way?
Those who know go-zero's design philosophy or have watched my videos may remember what I often say: 'tools over conventions and documentation'.
Caching involves a lot of knowledge, everyone's cache code would otherwise differ greatly, and getting every point right is very hard. So how does go-zero solve this problem?
- Encapsulate as much of the abstracted, generic solution as possible into the framework. The whole cache control flow then needs no extra attention; as long as you call the right methods, there is no room for error.
- Generate the code from the table-creation SQL all the way to CRUD + Cache with one tool command, avoiding the need to hand-write a pile of structures and control logic based on the table structure.
![cache generate](/img/cache-generate.png)
This is a `CRUD + Cache` generation illustration taken from go-zero's official example `bookstore`. We can provide the `schema` to `goctl` via a table-creation `sql` file or a `datasource`, and `goctl`'s `model` subcommand then generates the required `CRUD + Cache` code with one command.
This ensures that everyone writes identical cache code; can tool-generated code be any different?
---
sidebar_position: 8
---
# Discovery
### What is Service Registration and Discovery
For those working on microservices, the concepts of service registration and service discovery should not be unfamiliar.
Simply put, when Service A needs to rely on Service B, we need to tell Service A where to invoke Service B. This is the problem to be solved by service registration and discovery.
![discovery](/img/discovery.png)
- Service B registers itself with the Service Registry, called Service Registration
- Service A's discovery of Service B's node information from Service Registry is called Service Discovery
### Service Register
Service registration applies to the server side; it takes place after the service starts and consists of several parts:
- Start-up registration
- Timed renewal
- Withdrawal
#### Start Register
When a service node starts, it needs to register itself with the `Service Registry` so that other nodes can discover it. Registration should happen once the service is up and ready to accept requests, and an expiration time (lease) should be set so that the node cannot still be reached after the process exits abnormally.
#### Scheduled Renewal
Scheduled renewals are equivalent to `keep alive`, telling the `Service Registry` periodically that it is still alive and can continue to serve.
#### Withdrawal
When a process exits, we should actively deregister its information so that callers can redirect requests to other nodes in time. Meanwhile, go-zero's adaptive load balancing ensures that even a node that exits without deregistering is taken out of rotation in time.
### Service Discovery
Service discovery is for the calling side and generally involves two categories of issues:
- Stock acquisition
- Incremental Listening
There is also a common engineering problem:
- Responding to service discovery failures
When the service discovery service itself (etcd, consul, nacos, etc.) goes down, we do not modify the list of `endpoints` we have already acquired, which better ensures that services depending on etcd and the like can still interact normally after it goes down.
#### Stock Acquisition
![get data](/img/get-data.png)
When `Service A` starts, it needs to get the list of existing nodes of `Service B` from `Service Registry`: `Service B1`, `Service B2`, `Service B3`, and then select the appropriate nodes to send requests according to its own load balancing algorithm.
#### Incremental Listening
In the diagram above there are already `Service B1`, `Service B2`, and `Service B3`; if `Service B4` starts, we need to notify `Service A` that a new node exists, as shown in the figure.
![new node](/img/new-node.png)
#### Responding to service discovery failures
On the calling side, we always cache a list of available nodes in memory. Whether we use `etcd`, `consul`, or `nacos`, the service discovery cluster itself may fail. Take etcd as an example: when etcd fails, we freeze Service B's node information instead of changing it. We must not empty the node list at this point; once emptied it cannot be re-acquired, while Service B's nodes are most likely still healthy. go-zero also automatically isolates failed nodes and restores them when they recover.
![discovery trouble](/img/discovery-trouble.png)
The basic principles of service registration and discovery are roughly as above, though a real implementation is of course more complicated. Now let's look at which service discovery methods `go-zero` supports.
### go-zero's internal service discovery
`go-zero` supports three service discovery methods by default:
- Direct Connect
- etcd-based service discovery
- Service discovery based on kubernetes endpoints
#### Direct connection
Direct connection is the simplest approach. When our deployment is simple enough, for example a single machine can carry the business, we can just use this method.
![direct connection](/img/direct-connection.png)
Just specify `endpoints` directly in the `rpc` configuration file, e.g.
```yaml
Rpc:
  Endpoints:
    - 192.168.0.111:3456
    - 192.168.0.112:3456
```
The `zrpc` caller then spreads the load across both nodes; when one node fails, `zrpc` removes it automatically, and once the node recovers, it receives load again.
The disadvantage of this approach is that nodes cannot be added dynamically: every new node requires modifying the callers' configuration and restarting them.
#### etcd-based service discovery
Once our services reach a certain scale, since one service may be depended on by many others, we need to be able to add and remove nodes dynamically without modifying many callers' configurations and restarting them.
Common service discovery schemes are `etcd`, `consul`, `nacos`, etc.
![discovery etcd](/img/discovery-etcd.png)
`go-zero` has a built-in service discovery scheme based on `etcd`, which is used as follows.
```yaml
Rpc:
  Etcd:
    Hosts:
      - 192.168.0.111:2379
      - 192.168.0.112:2379
      - 192.168.0.113:2379
    Key: user.rpc
```
- Hosts is the etcd cluster address
- Key is the key that the service is registered with
#### Kubernetes Endpoints-based Service Discovery
If our services are deployed in a `Kubernetes` cluster, Kubernetes itself already manages cluster state through its own `etcd`, and every service registers its node information in `Endpoints` objects. So we can simply grant the `deployment` permission to read the cluster's `Endpoints` objects to obtain the node information.
![discovery k8s](/img/discovery-k8s.png)
- Each `Pod` of `Service B` registers itself to the `Endpoints` of the cluster when it starts
- Each `Pod` of `Service A` can get the node information of `Service B` from the `Endpoints` of the cluster when it starts
- When the node of `Service B` changes, `Service A` can sense it through the `Endpoints` of the `watch` cluster
Before this mechanism can work, we need to grant the `pod`s in the current `namespace` access to the cluster's `Endpoints`. Three concepts are involved:
- ClusterRole
- Defines cluster-wide permission roles, not controlled by namespace
- ServiceAccount
- Defines the namespace-wide service account
- ClusterRoleBinding
- Bind the defined ClusterRole to the ServiceAccount of different namespaces
The specific Kubernetes configuration file can be found here; modify the namespace as needed.
Note: remember to check whether these configurations are in place if you find at startup that you cannot access `Endpoints` :)
zrpc's service discovery based on `Kubernetes Endpoints` is used as follows:
```yaml
Rpc:
  Target: k8s://mynamespace/myservice:3456
```
where
- `mynamespace`: the `namespace` where the invoked `rpc` service is located
- `myservice`: the name of the called `rpc` service
- `3456`: the port of the called `rpc` service
Be sure to add `serviceAccountName` to the `deployment` manifest to specify which `ServiceAccount` to use, as in the following example.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine-deployment
  labels:
    app: alpine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpine
  template:
    metadata:
      labels:
        app: alpine
    spec:
      serviceAccountName: endpoints-reader
      containers:
        - name: alpine
          image: alpine
          command:
            - sleep
            - infinity
```
Note that `serviceAccountName` specifies which `ServiceAccount` is used for the `pod` created by the `deployment`.
After both `server` and `client` are deployed to the `Kubernetes` cluster, you can restart all `server` nodes on a rolling basis with the following command
```shell
kubectl rollout restart deploy -n adhoc server-deployment
```
Use the following command to view the `client` node log.
```shell
kubectl -n adhoc logs -f deploy/client-deployment --all-containers=true
```
You can see that our service discovery mechanism follows the changes to the `server` node perfectly and there are no abnormal requests during the service update.
:::tip
The full code example is available at https://github.com/zeromicro/zero-examples/tree/main/discovery/k8s
:::
---
sidebar_position: 6
---
# Overload Protection
### Why load shedding is needed
In a microservice cluster, call chains are complex, and as a service provider you need a mechanism to protect yourself, preventing callers from overwhelming you with indiscriminate calls and ensuring the high availability of your own service.
The most common protection mechanism is rate limiting. The prerequisite for using a rate limiter is knowing the maximum concurrency the service can handle, generally obtained through load testing before going live. Each interface has different rate-limit parameters, and since the system keeps iterating, its processing capacity often changes, so before every release we would have to load test again and adjust the parameters, which becomes very tedious.
So is there a more concise limiting mechanism that achieves maximum self-protection?
### What is Adaptive Load Shedding
Adaptive load shedding protects the service itself very intelligently, dynamically deciding whether to shed load based on the service's own system load.
Design objectives:
- Ensure the system does not get dragged down.
- Maintain system throughput while the system stays stable.
The key, then, is how to measure the load on the service itself.
Judging high load depends on two main indicators:
- Whether the CPU is overloaded.
- Whether the maximum concurrency is exceeded.
When both of the above hold at the same time, the service is in a high-load state, and adaptive load shedding is triggered.
Note that in high-concurrency scenarios, CPU load and concurrency often fluctuate sharply; in the data we call this phenomenon glitches. Glitches can make the system perform frequent, unnecessary load shedding, so we generally use the average of the metrics over a recent window to smooth them out. This could be implemented by recording all metric values over a period and computing the average directly, but that occupies a fair amount of system resources.
There is a statistical algorithm for this, the exponential moving average, which estimates the local mean of a variable so that its updates depend on its historical values over time; the mean can be estimated without recording all the historical values, saving precious server resources.
The principle of the moving average algorithm is explained very clearly in this article.
Denote the value of variable V at time t as Vt, and let θt be the observed value at time t. Without the moving average model, Vt = θt; with it, Vt is updated as follows:
```shell
Vt = β ⋅ Vt-1 + (1 − β) ⋅ θt
```
- Vt = θt for β = 0
- β = 0.9, which is approximately the average of the last 10 θt values
- β = 0.99, which is approximately the average of the last 100 θt values
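A minimal sketch of the exponential moving average (seeding with the first observation is an assumption for illustration):

```go
package main

import "fmt"

// ema maintains an exponential moving average: V = β*V + (1-β)*θ.
// With β = 0.9 the result roughly averages the last 10 observations.
type ema struct {
	beta float64
	v    float64
	init bool
}

func (e *ema) add(theta float64) float64 {
	if !e.init {
		e.v, e.init = theta, true // seed with the first observation
		return e.v
	}
	e.v = e.beta*e.v + (1-e.beta)*theta
	return e.v
}

func main() {
	e := &ema{beta: 0.9}
	// One CPU-usage glitch (a spike to 100) among steady readings of 50:
	for _, x := range []float64{50, 50, 50, 100, 50, 50, 50} {
		fmt.Printf("%.1f ", e.add(x))
	}
	fmt.Println()
	// the spike moves the smoothed value only from 50.0 to about 55.0
}
```

This is how a single glitch barely disturbs the smoothed metric, while only two values (β and the current average) need to be kept in memory.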
### Code implementation
Next, let's look at the code implementation of go-zero's adaptive load shedding.
![load](/img/load.png)
Adaptive load shedding interface definition.
```go title="core/load/adaptiveshedder.go"
// Callback functions
Promise interface {
	// Pass is called back when the request succeeds
	Pass()
	// Fail is called back when the request fails
	Fail()
}

// Shedder is the load shedding interface definition
Shedder interface {
	// Allow does the shedding check:
	// 1. if allowed, the caller must call Promise.Pass()/Promise.Fail() to report the actual execution result
	// 2. if rejected, the service-overloaded error ErrServiceOverloaded is returned
	Allow() (Promise, error)
}
```
The interface definition is very concise, which means it is also very simple to use: a single `Allow() (Promise, error)` method is exposed to the outside world.
Example of go-zero usage:
The business only needs to call this method to decide whether to shed load. If load is shed, end the flow directly; otherwise execute the business logic, and finally use the returned `Promise` to report the execution result via callback.
```go
func UnarySheddingInterceptor(shedder load.Shedder, metrics *stat.Metrics) grpc.UnaryServerInterceptor {
ensureSheddingStat()
return func(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo,
handler grpc.UnaryHandler) (val interface{}, err error) {
sheddingStat.IncrementTotal()
var promise load.Promise
// Check for downgrades
promise, err = shedder.Allow()
// Drop load, record relevant logs and metrics
if err != nil {
metrics.AddDrop()
sheddingStat.IncrementDrop()
return
}
// Final callback execution result
defer func() {
// Execution Failure
if err == context.DeadlineExceeded {
promise.Fail()
// Successful implementation
} else {
sheddingStat.IncrementPass()
promise.Pass()
}
}()
// Implementation of business methods
return handler(ctx, req)
}
}
```
Definition of the struct that implements the interface.
There are three main kinds of properties:
- CPU load threshold: exceeding this value means the CPU is under high load.
- Cooling period: if the service has shed load recently, it enters a cooling period, to prevent the service from being pressured again immediately while the load is still coming down, which would cause back-and-forth jitter. Since it takes some time for the load to drop, the shedder should keep checking during the cooling period whether the concurrency exceeds the limit, and keep discarding requests while it does.
- Concurrency numbers: the concurrency currently being processed, the average concurrency, and the request counts and response times over the recent window; these are used to determine whether the current concurrency exceeds the maximum concurrency the system can carry.
```go
// option parameter mode
ShedderOption func(opts *shedderOptions)
// Optional configuration parameters
shedderOptions struct {
// Sliding time window size
window time.Duration
// Number of sliding time windows
buckets int
// cpu load threshold
cpuThreshold int64
}
// Adaptive load drop structure, need to implement Shedder interface
adaptiveShedder struct {
// cpu load threshold
// Higher than the threshold means high load needs to be downgraded to ensure service
cpuThreshold int64
// How many barrels in 1s
windows int64
// Number of concurrent
flying int64
// Sliding and smoothing the number of concurrent
avgFlying float64
// Spin locks, one service shares one drop load
// Locks must be applied when counting the number of requests currently being processed
// lossless concurrency for better performance
avgFlyingLock syncx.SpinLock
// Last rejection time
dropTime *syncx.AtomicDuration
// Have you been rejected recently
droppedRecently *syncx.AtomicBool
// Request statistics, with a sliding time window to record metrics over the most recent period
passCounter *collection.RollingWindow
// Response time statistics by sliding time windows to record metrics over the most recent period
rtCounter *collection.RollingWindow
}
```
Adaptive load shedding constructor
```go
func NewAdaptiveShedder(opts ...ShedderOption) Shedder {
// Return the default no-op implementation when the developer disables it,
// to keep the calling code uniform
// go-zero uses this design in many places, such as Breaker and the logging component
if !enabled.True() {
return newNopShedder()
}
// options mode sets optional configuration parameters
options := shedderOptions{
// Default statistics for the last 5s
window: defaultWindow,
// Default barrel quantity 50
buckets: defaultBuckets,
// cpu load
cpuThreshold: defaultCpuThreshold,
}
for _, opt := range opts {
opt(&options)
}
// Calculate each window interval time, default is 100ms
bucketDuration := options.window / time.Duration(options.buckets)
return &adaptiveShedder{
// cpu load
cpuThreshold: options.cpuThreshold,
// How many sliding window cells are included in 1s time
windows: int64(time.Second / bucketDuration),
// Last rejection time
dropTime: syncx.NewAtomicDuration(),
// Have you been rejected recently
droppedRecently: syncx.NewAtomicBool(),
// qps statistics, sliding time window
// Ignore the current writing window (bucket), incomplete time period may lead to data anomalies
passCounter: collection.NewRollingWindow(options.buckets, bucketDuration,
collection.IgnoreCurrentBucket()),
// Response time statistics, sliding time window
// Ignore the current writing window (bucket), incomplete time period may lead to data anomalies
rtCounter: collection.NewRollingWindow(options.buckets, bucketDuration,
collection.IgnoreCurrentBucket()),
}
}
```
Load shedding check `Allow()`.
Checks whether the current request should be discarded. If it is discarded, the business side must interrupt the request directly to protect the service; it also means load shedding has taken effect and the shedder enters the cooling period. If the request is allowed, a promise is returned and the shedder waits for the business side to call back so the metrics can be recorded.
```go
// Down load check
func (as *adaptiveShedder) Allow() (Promise, error) {
// Check if the request was discarded
if as.shouldDrop() {
// Set drop time
as.dropTime.Set(timex.Now())
// Recently dropped
as.droppedRecently.Set(true)
// Return to Overload
return nil, ErrServiceOverloaded
}
// Number of requests being processed plus 1
as.addFlying(1)
// Each allowed request here returns a new promise object
// The promise holds the drop pointer object internally
return &promise{
start: timex.Now(),
shedder: as,
}, nil
}
```
Check whether the request should be dropped: `shouldDrop()`.
```go
// Whether the request should be discarded
func (as *adaptiveShedder) shouldDrop() bool {
// The current cpu load exceeds the threshold
// Service should continue to check load and try to discard requests while on cooldown
if as.systemOverloaded() || as.stillHot() {
// Check if the concurrency being processed exceeds the current maximum number of concurrency that can be carried
// Discard the request if it exceeds it
if as.highThru() {
flying := atomic.LoadInt64(&as.flying)
as.avgFlyingLock.Lock()
avgFlying := as.avgFlying
as.avgFlyingLock.Unlock()
msg := fmt.Sprintf(
"dropreq, cpu: %d, maxPass: %d, minRt: %.2f, hot: %t, flying: %d, avgFlying: %.2f",
stat.CpuUsage(), as.maxPass(), as.minRt(), as.stillHot(), flying, avgFlying)
logx.Error(msg)
stat.Report(msg)
return true
}
}
return false
}
```
CPU threshold check `systemOverloaded()`.
The CPU load value is computed with the moving average algorithm to smooth out spikes. Sampling every 250ms with β = 0.95 is roughly equivalent to averaging the last 20 CPU load samples, a time window of about 5s.
```go
// Is the cpu overloaded
func (as *adaptiveShedder) systemOverloaded() bool {
return systemOverloadChecker(as.cpuThreshold)
}
// cpu check function
systemOverloadChecker = func(cpuThreshold int64) bool {
return stat.CpuUsage() >= cpuThreshold
}
// cpu sliding average
curUsage := internal.RefreshCpu()
prevUsage := atomic.LoadInt64(&cpuUsage)
// cpu = cpuᵗ⁻¹ * beta + cpuᵗ * (1 - beta)
// Sliding average algorithm
usage := int64(float64(prevUsage)*beta + float64(curUsage)*(1-beta))
atomic.StoreInt64(&cpuUsage, usage)
```
Check if it is stillHot:
Determines whether the system is currently in the cooling period. If it is, the shedder should keep trying to discard requests. The main purpose is to prevent a system that is recovering from overload from being pressured again before the load has come down, which would cause back-and-forth jitter; at that point it should keep discarding requests.
```go
func (as *adaptiveShedder) stillHot() bool {
// No recent requests have been discarded
// means the service is working
if !as.droppedRecently.True() {
return false
}
// Not in cooling period
dropTime := as.dropTime.Load()
if dropTime == 0 {
return false
}
// Cooling time default is 1s
hot := timex.Since(dropTime) < coolOffDuration
// Not in cooling-off period, normal processing of requests in progress
if !hot {
// Reset drop records
as.droppedRecently.Set(false)
}
return hot
}
```
Check the concurrency currently being processed: `highThru()`.
Once the concurrency currently being processed exceeds the concurrency limit, the shedder enters the load-shedding state.
Why is a lock needed here? Because the adaptive shedder is shared globally, and the concurrency average must be kept correct.
Why a spin lock? Because the critical section is tiny, spinning avoids blocking other goroutines and performs better than a blocking mutex.
```go
func (as *adaptiveShedder) highThru() bool {
// Add lock
as.avgFlyingLock.Lock()
// Get the sliding average
// Update at the end of each request
avgFlying := as.avgFlying
// Unlock
as.avgFlyingLock.Unlock()
// Maximum concurrency of the system at this time
maxFlight := as.maxFlight()
// Whether the number of concurrency being processed and the average concurrency is greater than the system's maximum concurrency
return int64(avgFlying) > maxFlight && atomic.LoadInt64(&as.flying) > maxFlight
}
```
How can we get the number of requests being processed and the average concurrency?
The current concurrency count is actually very simple: concurrency +1 for each allowed request, -1 in the promise object's callback when the request completes; the average concurrency is computed with the moving average algorithm.
```go
type promise struct {
// Request start time
// Statistics on request processing time
start time.Duration
shedder *adaptiveShedder
}
func (p *promise) Fail() {
// End of request, number of requests currently being processed - 1
p.shedder.addFlying(-1)
}
func (p *promise) Pass() {
// Response time in milliseconds
rt := float64(timex.Since(p.start)) / float64(time.Millisecond)
// End of request, number of requests currently being processed - 1
p.shedder.addFlying(-1)
p.shedder.rtCounter.Add(math.Ceil(rt))
p.shedder.passCounter.Add(1)
}
func (as *adaptiveShedder) addFlying(delta int64) {
flying := atomic.AddInt64(&as.flying, delta)
// When the request is finished, count the concurrency of requests currently being processed
if delta < 0 {
as.avgFlyingLock.Lock()
// Estimate the average number of requests for the current service over a recent period of time
as.avgFlying = as.avgFlying*flyingBeta + float64(flying)*(1-flyingBeta)
as.avgFlyingLock.Unlock()
}
}
```
Getting the current concurrency is not enough; we also need to know the maximum concurrency the system can handle, i.e. the maximum number of in-flight requests.
The pass count and the response time are both collected with sliding windows, whose implementation is described in the article on the adaptive circuit breaker.
Maximum concurrency of the current system = maximum passes per unit window time * minimum response time per unit window time.
```go
// Calculate the maximum number of concurrency of the system per second
// Maximum concurrency = maximum requests (qps) * minimum response time (rt)
func (as *adaptiveShedder) maxFlight() int64 {
// windows = buckets per second
// maxQPS = maxPASS * windows
// minRT = min average response time in milliseconds
// maxQPS * minRT / milliseconds_per_second
// as.maxPass() * as.windows - maximum qps per bucket * number of buckets contained in 1s
// as.minRt()/1e3 - the smallest average response time of all buckets in the window / 1000ms here to convert to seconds
return int64(math.Max(1, float64(as.maxPass()*as.windows)*(as.minRt()/1e3)))
}
// Sliding time window with multiple buckets
// Find the one with the highest number of requests
// Each bucket spans `interval` ms
// qps means requests in 1s: qps = maxPass * time.Second/interval
func (as *adaptiveShedder) maxPass() int64 {
var result float64 = 1
// The bucket with the highest number of requests in the current time window
as.passCounter.Reduce(func(b *collection.Bucket) {
if b.Sum > result {
result = b.Sum
}
})
return int64(result)
}
// Sliding time window with multiple buckets
// Calculate the minimum average response time
// because it is necessary to calculate the maximum number of concurrency that the system can handle in a recent period of time
func (as *adaptiveShedder) minRt() float64 {
// Default is 1000ms
result := defaultMinRt
as.rtCounter.Reduce(func(b *collection.Bucket) {
if b.Count <= 0 {
return
}
// Average response time for requests
avg := math.Round(b.Sum / float64(b.Count))
if avg < result {
result = avg
}
})
return result
}
```
### Reference
[Google BBR Congestion Control Algorithm](https://queue.acm.org/detail.cfm?id=3022184)
[Principle of sliding average algorithm](https://www.cnblogs.com/wuliytTaotao/p/9479958.html)
[go-zero adaptive load shedding](https://go-zero.dev/cn/loadshedding.html)

---
sidebar_position: 10
---
# Metric
### Monitoring Access
The `go-zero` framework integrates service metrics monitoring based on `prometheus`. However, it is not explicitly turned on and needs to be configured in `config.yaml` by the developer as follows.
```yaml
Prometheus:
Host: 127.0.0.1
Port: 9091
Path: /metrics
```
If the developer is building `Prometheus` locally, the configuration file `prometheus.yaml` in `Prometheus` needs to write the configuration that needs to collect the service monitoring information.
```yaml
- job_name: 'file_ds'
static_configs:
- targets: ['your-local-ip:9091']
labels:
job: activeuser
app: activeuser-api
env: dev
instance: your-local-ip:service-port
```
Since Prometheus is run locally with `docker` here, place `prometheus.yaml` in the `docker-prometheus` directory and mount it:
```shell
docker run \
-p 9090:9090 \
-v dockeryml/docker-prometheus:/etc/prometheus \
prom/prometheus
```
Open `localhost:9090` and you can see.
![prometheus](/img/prometheus.png)
By clicking on `http://service-ip:9091/metrics` you can see the monitoring information for this service.
![prometheus data](/img/prometheus-data.png)
Above we can see two kinds of metrics: `bucket`, and `count/sum`.
How does `go-zero` integrate monitoring metrics? What metrics are monitored? How do we define our own metrics? The following explains these questions.
:::tip
For basic access to the above, see our other article: https://zeromicro.github.io/go-zero/service-monitor.html
:::
### How to integrate
The request method in the above example is `HTTP`, meaning monitoring metrics are collected continuously while the server handles requests. This naturally suggests middleware; the specific code:
```go title="https://github.com/tal-tech/go-zero/blob/master/rest/handler/prometheushandler.go"
var (
metricServerReqDur = metric.NewHistogramVec(&metric.HistogramVecOpts{
...
// Monitoring Indicators
Labels: []string{"path"},
// Buckets used for the histogram distribution
Buckets: []float64{5, 10, 25, 50, 100, 250, 500, 1000},
})
metricServerReqCodeTotal = metric.NewCounterVec(&metric.CounterVecOpts{
...
// Monitoring labels: just call Inc() when recording the metric
Labels: []string{"path", "code"},
})
)
func PromethousHandler(path string) func(http.Handler) http.Handler {
return func(next http.Handler) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Time of request for access
startTime := timex.Now()
cw := &security.WithCodeResponseWriter{Writer: w}
defer func() {
// Time of request return
metricServerReqDur.Observe(int64(timex.Since(startTime)/time.Millisecond), path)
metricServerReqCodeTotal.Inc(path, strconv.Itoa(cw.Code))
}()
// Middleware release, after executing subsequent middleware and business logic. Rejoin here and do a metric upload of the complete request
// [🧅: The Onion Model]
next.ServeHTTP(cw, r)
})
}
}
```
The whole thing is actually quite simple:
- HistogramVec is responsible for collecting request durations.
- The buckets hold the duration boundaries specified by the options; each request is aggregated into the corresponding bucket.
- The final display is the latency distribution of each route, which directly shows developers where to optimize.
- CounterVec is responsible for collecting the specified labels.
- Labels: []string{"path", "code"}
- The labels form a tuple: go-zero records how many times each status code is returned per route, using (path, code) as the key. If there are too many 4xx/5xx responses, shouldn't you look at the health of your service?
### How to customize
`go-zero` also provides a basic wrapper over `prometheus metric` for developers to build their own prometheus middleware.
:::tip
code: https://github.com/tal-tech/go-zero/tree/master/core/metric
:::
| Name | Usage | Functions |
|----------------|-----------------|----------------------------------------|
| CounterVec | A monotonically increasing count. Usage: QPS statistics | CounterVec.Inc(): metric +1 |
| GaugeVec | A single metric that can go up or down. Usage: disk capacity, CPU/Mem usage | GaugeVec.Inc()/GaugeVec.Add(): metric +1 / metric +N (N may be negative) |
| HistogramVec | The distribution of observed values. Usage: request latency, response size | HistogramVec.Observe(val, labels): record the value, find the bucket it falls into, and +1 |
Let's also take a basic look at `HistogramVec.Observe()`.
We can see in the chart above that each HistogramVec statistic produces 3 series:
- _count: the number of observations
- _sum: the sum of all observations
- _bucket{le=a1}: the number of observations in [-inf, a1]
So we can infer that three kinds of data are maintained during counting.
```go
// Basically the statistics in prometheus are counted using the atomic CAS method
// performance is higher than using Mutex
func (h *histogram) observe(v float64, bucket int) {
n := atomic.AddUint64(&h.countAndHotIdx, 1)
hotCounts := h.counts[n>>63]
if bucket < len(h.upperBounds) {
// val Corresponding data bucket +1
atomic.AddUint64(&hotCounts.buckets[bucket], 1)
}
for {
oldBits := atomic.LoadUint64(&hotCounts.sumBits)
newBits := math.Float64bits(math.Float64frombits(oldBits) + v)
// sum indicator value +v (after all, it is the total sum)
if atomic.CompareAndSwapUint64(&hotCounts.sumBits, oldBits, newBits) {
break
}
}
// count Statistics +1
atomic.AddUint64(&hotCounts.count, 1)
}
```
So, for developers who want to define their own monitoring metrics:
- Specify the middleware to generate when generating API code with goctl: https://zeromicro.github.io/go-zero/middleware.html
- Write your own metrics logic in that middleware file
- Of course, developers can also write metrics-collection logic in the business logic, in the same way as above.
The above covers the HTTP part; the logic for the RPC part is similar, and you can see its design in the interceptor section.

---
sidebar_position: 1
---
# rest
### Overview
From daily development experience, a good web framework needs to meet the following features in general.
* route matching/multi-route support
* support for custom middleware
* complete decoupling of framework and business development to facilitate rapid development
* parameter validation / matching
* monitoring / logging / metrics and other service self-checking features
* Service self-protection
### rest overview
rest has the following characteristics:
* initialize resources with `context` (different from the `context` of `gin`) → save them in `serviceCtx` and share them in `handler`s (as for resource pooling, that is left to the resources themselves; `serviceCtx` is just the entry point and sharing point)
* independent router declaration file, and add the concept of router group, convenient for developers to organize the code structure
* Built-in middleware: monitoring/fusing/forensics, etc.
* Use goctl codegen + option design pattern, convenient for developers to control part of the middleware access
The following diagram depicts the pattern and most of the processing paths for rest to handle requests.
* The framework's built-in middleware already helps developers to solve most of the self-processing logic of the service
* Also go-zero gives developers out-of-the-box components at the business logic (dq, fx, etc.)
* from the development model to help developers only need to focus on their business logic and the required resources to prepare
![rest](/img/rest.png)
### Startup process
The following diagram depicts the modules and the general flow of the overall server startup. Prepare to analyze the rest implementation according to the following flow.
* Based on http.server encapsulation and modification: separating engine (web framework core) and option
* radix-tree construction for multi-route matching
* middleware using the onion model → []Middleware
* http parse parsing and match-checking → httpx.Parse()
* Metrics (createMetrics()) and monitoring buried sites (prometheus) are collected during the request process
![rest_start](/img/rest_start.png)
#### server engine
The engine is used throughout the server life cycle.
* router will carry a developer-defined path/handler that will be executed at the end of router.handle()
* Registered custom middleware + framework middleware, executed before the router handler logic
Note: go-zero's processing granularity is the route; wrapping and processing are performed at the route level.
![server_engine](/img/server_engine.jpeg)
### Routing Matching
So when the request arrives, how does it get to the routing layer in the first place?
First of all, in the development of the most primitive http server, there is a piece of code like this.
![basic_server](/img/basic_server.png)
`http.ListenAndServe()` internally executes `server.ListenAndServe()`
Let's see how this works in the rest.
![rest_route](/img/rest_route.png)
The handler passed in is actually the router generated by router.NewRouter(), which carries the entire set of handler functions for the server.
At the same time, the http.Server structure is initialized with the handler injected into it.
![rest_route](/img/rest_handle.png)
After `http.Server` receives the request, the final execution is likewise `handler.ServeHTTP(rw, req)`
![rest_route](/img/servehttp.png)
So the built-in `router` also needs to implement `ServeHTTP`. As for how the `router` implements `ServeHTTP` itself: it simply finds the matching route and then executes that route's corresponding `handle` logic.
### Parameter analysis
Parsing arguments is a basic capability that the http framework needs to provide. In the code generated by goctl code gen, the req argument parse function is already integrated in the handler layer.
![rest_route](/img/rest_parse.png)
Going into `httpx.Parse()`, it mainly parses the following pieces:
```go title="https://github.com/zeromicro/go-zero/blob/master/rest/httpx/requests.go#L32:6"
```
* Parse path parameters
* Parse form values
* Parse HTTP headers
* Parse the JSON body
:::info
The parameter validation performed in Parse() is described in the tag modifier section of https://go-zero.dev/cn/api-grammar.html
:::
### Usage examples
[Usage examples](https://github.com/zeromicro/zero-examples/tree/main/http)

---
sidebar_position: 9
---
# Tracing

---
sidebar_position: 2
---
# About Us
## go-zero
go-zero is a web and rpc framework that integrates various engineering practices. Its resilient design guarantees the stability of services under heavy concurrency, and it has been fully battle-tested in production.
go-zero includes a minimal API definition and generation tool, goctl, which can generate Go, iOS, Android, Kotlin, Dart, TypeScript and JavaScript code from the api definition file in one click, ready to run.
## go-zero Author
[<img src="/img/kevin.jpeg" width="200px" height="200px" alt="kevwan"/>](https://github.com/kevwan)
*kevwan*
He has 14 years of experience in R&D team management, 16 years of experience in architecture design, 20 years of experience in engineering, and has been responsible for the architecture design of many large projects.
He is also a lecturer at Tencent Cloud Developer Conference.
## go-zero Community
We currently have more than 7000 community members. There you can discuss any go-zero technology with everyone, give feedback on issues, get the latest go-zero news, and read the technical insights that members share every day.
## go-zero Community Groups
<img src="https://raw.githubusercontent.com/tal-tech/zero-doc/main/doc/images/wechat.jpg" width="300" alt="Community Groups"/>

---
sidebar_position: 1
---
# Introduction
go-zero is a web and rpc framework that integrates various engineering practices. Its resilient design guarantees the stability of services under heavy concurrency, and it has been fully battle-tested in production.
go-zero includes a minimal API definition and generation tool, goctl, which can generate Go, iOS, Android, Kotlin, Dart, TypeScript and JavaScript code from the api definition file in one click, ready to run.
Benefits of using go-zero:
- :white_check_mark: Easily gain the stability needed to support services with tens of millions of daily active users.
- :white_check_mark: Built-in cascading timeout control, rate limiting, adaptive circuit breaking, adaptive load shedding and other microservice governance capabilities, with no configuration or extra code required.
- :white_check_mark: Microservice governance middleware can be seamlessly integrated with other existing frameworks.
- :white_check_mark: Minimal API description, one-click code generation for each end.
- :white_check_mark: Automatic verification of the legitimacy of client request parameters.
- :white_check_mark: Extensive microservice governance and concurrency toolkit.
<img src="https://gitee.com/kevwan/static/raw/master/doc/images/architecture.png" alt="Architecture diagram" width="1500" />
## go-zero framework background
In early 2018, we decided to migrate from a `Java+MongoDB` monolithic architecture to a microservices architecture. After careful thought and comparison, we decided on:
* Go-based language
* Efficient performance
* Simple syntax
* Extensively proven engineering efficiency
* Ultimate deployment experience
* Extremely low server-side resource costs
* Self-developed microservices framework
* A lot of experience in self-researching microservices frameworks
* Need to have faster problem location
* Easier to add new features
## go-zero framework design thinking
For the design of the microservice framework, we expect to guarantee the stability of microservices while paying special attention to R&D efficiency. So at the beginning of the design, we have some guidelines as follows.
* Keep it simple, the first principle
* resilient design, fault-oriented programming
* Tools over conventions and documentation
* High availability
* Highly concurrent
* Easy to scale
* Business development friendly, encapsulating complexity
* Constraints do one thing only one way
In less than half a year, we completely finished the migration from `Java+MongoDB` to a microservice system based mainly on `Golang+MySQL`, fully online at the end of August 2018. It has guaranteed the subsequent rapid growth of the business and ensured the high availability of the whole service.
## go-zero project implementation and features
go-zero is an integrated web and rpc framework with various engineering practices, with the following key features.
* powerful tool support, as little code as possible to write
* minimalist interface
* fully compatible with net/http
* middleware support for easy extensions
* high performance
* Fault-oriented programming, resilient design
* Built-in service discovery, load balancing
* Built-in rate limiting, circuit breaking, load shedding, with automatic triggering and automatic recovery
* Automatic API parameter validation
* Timeout cascade control
* Automatic cache control
* Link tracking, statistical alarms, etc.
* High concurrency support, stable to ensure the daily traffic flood during the epidemic
In the figure below, we guarantee high availability of the overall service on several levels.
![Resilient design](https://gitee.com/kevwan/static/raw/master/doc/images/resilience.jpg)
If you think it's good, don't forget to **star** 👏
## Quick Start
#### For the full example, please see
[Quick Build Highly Concurrent Microservices](https://github.com/tal-tech/zero-doc/blob/main/doc/shorturl.md)
[Quick Build Highly Concurrent Microservices - Multi RPC Edition](https://github.com/tal-tech/zero-doc/blob/main/docs/zero/bookstore.md)
#### Install the `goctl` tool
`goctl` is pronounced as `go control`, not as `go C-T-L`. `goctl` means not being controlled by the code, but controlling it, where `go` does not mean `golang`. When I designed `goctl`, I wanted to use it to free our hands 👈
```shell
GO111MODULE=on GOPROXY=https://goproxy.cn/,direct go get -u github.com/tal-tech/go-zero/tools/goctl
```
If you are using go1.16, you can install it with the `go install` command
```shell
GOPROXY=https://goproxy.cn/,direct go install github.com/tal-tech/go-zero/tools/goctl@latest
```
Ensure that `goctl` is executable
#### Quick Generate api Service
```shell
goctl api new greet
cd greet
go mod init
go mod tidy
go run greet.go -f etc/greet-api.yaml
```
The default listener is on port `8888` (which can be changed in the configuration file) and can be requested via curl at
```shell
curl -i http://localhost:8888/from/you
```
Returns the following:
```http
HTTP/1.1 200 OK
Content-Type: application/json
Date: Thu, 22 Oct 2020 14:03:18 GMT
Content-Length: 14
{"message":""}
```
Writing business code.
* The api file defines the routes that the service exposes to the public
* You can pass dependencies to the logic in servicecontext.go, such as mysql, redis, etc.
* Add business processing logic to the logic corresponding to the get/post/put/delete requests defined in the api
#### Generate the Java, TypeScript, Dart, and JavaScript code needed by front-ends from the api file
```shell
goctl api java -api greet.api -dir greet
goctl api dart -api greet.api -dir greet
...
```
## Benchmark
![benchmark](https://gitee.com/kevwan/static/raw/master/doc/images/benchmark.png)
[The test code is available here](https://github.com/smallnest/go-web-framework-benchmark)
* awesome series (more articles in the 『microservices practice』 WeChat public account)
* [Quickly Building Highly Concurrent Microservices](https://github.com/tal-tech/zero-doc/blob/main/doc/shorturl.md)
* [Quickly Building Highly Concurrent Microservices - Multi RPC Edition](https://github.com/tal-tech/zero-doc/blob/main/docs/zero/bookstore.md)
* Featured `goctl` plugin
<table>
<tr>
<td>Plugin </td> <td>Application </td>
</tr>
<tr>
<td><a href="https://github.com/zeromicro/goctl-swagger">goctl-swagger</a></td> <td>One-click generation of <code>swagger</code> documentation from the <code>api</code> file</td>
</tr>
<tr>
<td><a href="https://github.com/zeromicro/goctl-android">goctl-android</a></td> <td>Generate <code>java (android)</code> <code>http client</code> request code</td>
</tr>
<tr>
<td><a href="https://github.com/zeromicro/goctl-go-compact">goctl-go-compact</a> </td> <td>Merge the <code>handler</code>s of the same <code>api</code> <code>group</code> into one go file</td>
</tr>
</table>
## WeChat public number
`go-zero` related articles will be published in the `microservices practice` WeChat public account. Welcome to scan the code to follow; you can also message me privately through the public account 👏
<img src="https://zeromicro.github.io/go-zero-pages/resource/go-zero-practise.png" alt="wechat" width="300" />
## WeChat Exchange Group
If there are any queries that are not covered in the documentation, you are welcome to ask in the group and we will reply as soon as possible.
You are welcome to suggest improvements in the group; we will evaluate them and make changes as soon as possible.
If you find a ***bug***, please open an ***issue*** promptly; we will confirm and fix it as soon as possible.
To keep out advertising accounts and identify technical peers, please ***star*** the project first, and mention the current **github** ***star*** count when adding me; I will then invite you into the **go-zero** group. Thanks!
Before adding me, please click ***star***; a small ***star*** is what motivates the authors to answer so many questions 🤝
<img src="https://raw.githubusercontent.com/tal-tech/zero-doc/main/doc/images/wechat.jpg" alt="wechat" width="300" />
---
sidebar_position: 3
---
# Join Us
## Summary
<img src="/img/go-zero.png" alt="go-zero" width="100px" height="100px" align="right" />
[go-zero](https://github.com/zeromicro/go-zero) is an open source project under the [MIT License](https://github.com/zeromicro/go-zero/blob/master/LICENSE). Anyone who finds bugs while using it, or wants new features, can contribute to go-zero. We warmly welcome your participation and respond to issues, prs, and so on as quickly as possible.
## Contribution
* [Pull Request](https://github.com/zeromicro/go-zero/pulls)
* [Issue](https://github.com/zeromicro/go-zero/issues)
:::tip contributions
Pull requests to go-zero need to meet certain standards:
* Comments should mainly be in English
* PRs should carry well-defined features with clear, concise descriptions
* Increase unit test coverage to 80%+.
:::
## Contribute code (pr)
* Go to the [go-zero](https://github.com/zeromicro/go-zero) project and fork it to your own github account.
* Go back to your own github home page and find the `xx/go-zero` project, where xx is your username, e.g. `anqiansong/go-zero`
![fork](/img/fork.png)
* Clone the code locally
![clone](/img/clone.png)
* Develop the code and push it to your own github repository
* Go to your own go-zero project in github, click `[Pull requests]` on the floating layer to enter the Compare page.
![pr](/img/new_pr.png)
* Click `[Create pull request]` to submit the PR
* To check whether the PR was submitted successfully, go to the [Pull requests](https://github.com/zeromicro/go-zero/pulls) page of [go-zero](https://github.com/zeromicro/go-zero); your submission should appear there, with the branch name matching your development branch
![pr record](/img/pr_record.png)
## Issue
In our community, many members actively report problems they run into while using go-zero. Because the community is large, feedback arrives at random even though we watch community activity in real time; while the team is still solving one member's problem, another report comes in and can easily be overlooked. To make sure every problem gets solved one by one, we strongly recommend giving feedback through issues, including but not limited to bugs and requests for new features. We also record new features in issues as we implement them, so you can follow the latest go-zero developments there, and you are welcome to join the discussions.
### How to open an issue
* Click [here](https://github.com/zeromicro/go-zero/issues) to go to go-zero's issue page, or visit [https://github.com/zeromicro/go-zero/issues](https://github.com/zeromicro/go-zero/issues) directly
* Click `[New issue]` in the upper right corner to create a new issue
* Fill in the issue title and content
* Click `[Submit new issue]` to submit the issue
## Document Contribution
The documentation repository [`go-zero.dev`](https://github.com/zeromicro/go-zero.dev) is built with [docusaurus](https://docusaurus.io). When documentation changes are merged into the master branch, Github Actions is triggered automatically to deploy the site.
### Adding/modifying documentation
First fork the docs repository and clone it locally, then add or modify docs in the corresponding subdirectory of the docs directory. Docs are written in Markdown and support some extended syntax; see [Docusaurus: Markdown Features](https://docusaurus.io/docs/markdown-features) for the supported syntax.
### Submit pr
After adding or modifying a document, submit a PR and wait for the team to merge it.
## Reference documentation
* [Github Pull request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/proposing-changes-to-your-work-with-pull-requests)
---
sidebar_position: 1
---
# Go Queue
Delayed queue: a message queue with a delay function
- deferred → an indefinite time in the future
- mq → consumption behavior is sequential
With this breakdown, the whole design is clear: the purpose is delay, and the carrier is an mq.
### Background
Here are some scenarios from my daily business:
- Creating a schedule entry that needs to remind the teacher before class
- Delayed push → pushing announcements and assignments that the teacher needs
To solve the above problems, the easiest and most direct way is to sweep the schedule table at regular intervals.
:::info
When the service starts, open an asynchronous concurrent process → scan the msg table at regular intervals, trigger an event when it arrives, and call the corresponding handler
:::
This approach has several disadvantages:
- Every service that needs a timed/delayed task needs a msg table for additional storage → storage is coupled with the business
- Timed scanning → bad timing control, may miss trigger time
- It burdens the database instance hosting the msg table: a service repeatedly scanning puts constant pressure on the database
What is the biggest problem?
The scheduling model is basically uniform everywhere; we should not re-implement the same business logic over and over.
We can consider taking the logic out of the specific business logic and turning it into a common part.
And this scheduling model is the delay queue.
To put it plainly:
A delayed queue stores future execution events in advance, then continuously scans that store and executes the corresponding task logic once the execution time arrives.
So is there a ready-made solution in the open source world? The answer is yes: [Beanstalk](https://github.com/beanstalkd/beanstalkd) basically meets these requirements.
### Design purpose
- Consumption behavior: at least once
- High availability
- Real-time
- Support for message deletion
The design approach for each of these goals is described in turn.
#### Consumption behavior
This concept comes from mq. mq offers several delivery guarantees for consumption:
- at most once → at most once, messages may be lost, but not repeated
- at least once → at least once, the message is definitely not lost, but may be repeated
- exactly once → once and only once, messages are not lost and not repeated, and are consumed only once.
Ideally, exactly once is guaranteed jointly by the producer and the consumer. When the producer cannot guarantee it, the consumer must de-duplicate before consuming so that each message is consumed exactly once; this is guaranteed directly inside the delay queue.
The simplest approach: use redis setNX to guarantee that each job id is consumed only once.
#### High Availability
Multi-instance deployment is supported. When one instance goes down, there is a backup instance that continues to provide services.
This externally available API uses a cluster model, where multiple nodes are encapsulated internally and redundant storage is available across multiple nodes.
#### Why not use Kafka?
We considered similar solutions that use message queues such as kafka/rocketmq as storage, but the storage design model led us to abandon that option.
For example, suppose we used message-queue storage like Kafka to implement the delay function: each delay duration would need its own topic (e.g. Q1-1s, Q1-2s, ...). This design is not a big problem when the delay durations are fixed, but if they vary widely, the number of topics explodes, turning sequential disk reads and writes into random ones; performance degrades, and restarts and failure recovery take longer.
- Too many topics → storage pressure
- Each topic stores jobs for a real time; reads across different times (topics) during scheduling turn sequential reads → random reads
- Similarly, when writing, sequential write → random write
### Architecture Design
![dq](/img/dq.png)
### API Design
producer
- producer.At(msg []byte, at time.Time)
- producer.Delay(body []byte, delay time.Duration)
- producer.Revoke(ids string)
consumer
- consumer.Consume(consume handler)
After using the delayed queue, the overall structure of the service is as follows, as well as the state changes of the jobs in the queue.
![delay queue](/img/delay-queue.png)
- service → producer.At(msg []byte, at time.Time) → insert delayed job into the tube
- Timed trigger → job state is updated to ready
- consumer gets ready job → fetches job and starts consuming; and changes state to reserved
- Execute the handler logic function passed into the consumer
### Production Practice
This section describes the specific ways we use the delayed queue in daily development.
#### Production side
- To produce a delayed task in development, just determine the task execution time
- Pass it to At(): producer.At(msg []byte, at time.Time)
- The time difference is calculated internally, and the job is inserted into the tube
- If the task time or the task content changes:
- At production time, you may need an extra mapping table of logic_id → job_id
- Look up the job_id → call producer.Revoke(ids string) to delete it, then reinsert it
#### Consumer side
First, the framework guarantees exactly-once consumption behavior. But the upper business logic can still fail to consume, because of network problems or anything else; handling that failure is left to the business developer. The reasons:
- The framework and underlying components only guarantee the correctness of job state transitions
- The framework's consumer side only guarantees uniform consumption behavior
- Delayed tasks behave differently across businesses:
- If the task must run, a failed consumption needs continuous retries until it succeeds
- If punctuality matters most and the business can tolerate it, a failed consumption can simply be discarded
Here is how the framework's consumer side ensures uniform consumption behavior.
It is split into two layers: cluster and node.
#### cluster
`https://github.com/tal-tech/go-queue/blob/master/dq/consumer.go#L45`
- The cluster internally wraps the consume handler with an extra layer
- It hashes the consume body and uses the hash as the redis de-duplication key
- If the key already exists, the message is discarded without processing
#### node
`https://github.com/tal-tech/go-queue/blob/master/dq/consumernode.go#L36`
- The consume node fetches a ready job, then first executes Reserve(TTR) to reserve the job it is about to process
- The node then calls delete(job), and then consumes it
- If consumption fails, the failure is thrown up to the business layer, which performs its own retry
So for the consumption side, developers need to implement the idempotency of consumption themselves.
![idempotent](/img/idempotent.png)
### Usage examples
[usage example](https://github.com/zeromicro/go-queue/tree/master/example)
---
sidebar_position: 2
---
# MapReduce
### Why MapReduce is needed
In practical business scenarios we often need to get the corresponding properties from different rpc services to assemble complex objects.
For example, to query product details.
- Product Service - Query Product Attributes
- Inventory service - query inventory properties
- Price service - query price attributes
- Marketing service - query marketing attributes
If it is a serial call, the response time will increase linearly with the number of rpc calls, so we will generally change serial to parallel to optimize performance.
A simple scenario can be handled with a waitGroup, but what if we need to validate the data returned by the rpc calls, transform it, and aggregate it? For this, the go-zero authors implemented an in-process concurrent data-processing utility modeled on the mapReduce architecture.
### Design Ideas
Let's put ourselves in the author's shoes and work out the business scenarios the concurrency tool needs to cover:
- Querying product details: call multiple services concurrently to assemble the product attributes, and end immediately when any call errors.
- Automatically recommending coupons on the product details page: validate the coupons concurrently; coupons that fail validation are dropped automatically, and all valid coupons are returned.
The above is really just processing input data and outputting the cleaned data at the end. There is a very classic asynchronous pattern for data processing: the producer-consumer pattern. So we can abstract the life cycle of batch data processing into roughly three phases.
![three stage](/img/three-stage.png)
- Data production generate
- data processing mapper
- data aggregation reducer
Data production is indispensable; data processing and data aggregation are optional. Data production and data processing support concurrent calls, while data aggregation is basically a pure in-memory operation for which a single goroutine is enough.
Since different stages of data processing are performed by different goroutines, it is natural to consider using channel to achieve communication between goroutines.
![flow](/img/flow.png)
How can I terminate the process at any time?
It's very simple, just listen to a global end channel in the goroutine.
### go-zero code implementation
`core/mr/mapreduce.go`
### Pre-requisite Knowledge - Channel Basic Usage
Since the MapReduce source code makes extensive use of channels for communication, here is a quick refresher on basic channel usage.
Remember to close the channel after writing
```go
ch := make(chan interface{})

// You need to actively close the channel after writing
defer func() {
	close(ch)
}()

go func() {
	// Way 1: v, ok form; ok becomes false once the channel is closed and drained
	for {
		v, ok := <-ch
		if !ok {
			return
		}
		t.Log(v)
	}

	// Way 2: range form, ends when the channel is closed
	// for i := range ch {
	// 	t.Log(i)
	// }

	// Way 3: drain the channel without using the values
	// for range ch {
	// }
}()

for i := 0; i < 10; i++ {
	ch <- i
	time.Sleep(time.Second)
}
```
Closed channels still support reads
Restricted channel read and write permissions
```go
func readChan(rch <-chan interface{}) {
for i := range rch {
log.Println(i)
}
}
func writeChan(wch chan<- interface{}) {
wch <- 1
}
```
### Interface definitions
Let's start with the three most core function definitions.
- Data production
- Data processing
- Data aggregation
```go
GenerateFunc func(source chan<- interface{})
MapperFunc func(item interface{}, writer Writer, cancel func(error))
ReducerFunc func(pipe <-chan interface{}, writer Writer, cancel func(error))
```
### User-oriented method definition
Method usage is covered in the official documentation and is not repeated here.
There are quite a few user-facing methods, falling into two main categories:
- No return value
- Execution terminates immediately when an error occurs
- Execution ignores errors
- With return value
- Write to source manually, read the aggregated data channel manually
- Write to source manually, read the aggregated data channel automatically
- source passed in externally, aggregated data channel read automatically
```go
func Finish(fns ...func() error) error
func FinishVoid(fns ...func())
func Map(generate GenerateFunc, mapper MapFunc, opts ...Option)
func MapVoid(generate GenerateFunc, mapper VoidMapFunc, opts ...Option)
func MapReduceVoid(generate GenerateFunc, mapper MapperFunc, reducer VoidReducerFunc, opts ...Option)
func MapReduce(generate GenerateFunc, mapper MapperFunc, reducer ReducerFunc, opts ...Option) (interface{}, error)
func MapReduceWithSource(source <-chan interface{}, mapper MapperFunc, reducer ReducerFunc,
opts ...Option) (interface{}, error)
```
The core methods are MapReduceWithSource and Map; all the other methods call them internally. Once you have figured out MapReduceWithSource, calling Map is straightforward.
### MapReduceWithSource source code implementation
It's all in this diagram
![mapreduce](/img/mapreduce.png)
```go
func MapReduceWithSource(source <-chan interface{}, mapper MapperFunc, reducer ReducerFunc,
opts ...Option) (interface{}, error) {
options := buildOptions(opts...)
output := make(chan interface{})
defer func() {
for range output {
panic("more than one element written in reducer")
}
}()
collector := make(chan interface{}, options.workers)
done := syncx.NewDoneChan()
writer := newGuardedWriter(output, done.Done())
var closeOnce sync.Once
var retErr errorx.AtomicError
finish := func() {
closeOnce.Do(func() {
done.Close()
close(output)
})
}
cancel := once(func(err error) {
if err != nil {
retErr.Set(err)
} else {
retErr.Set(ErrCancelWithNil)
}
drain(source)
finish()
})
go func() {
defer func() {
drain(collector)
if r := recover(); r != nil {
cancel(fmt.Errorf("%v", r))
} else {
finish()
}
}()
reducer(collector, writer, cancel)
}()
go executeMappers(func(item interface{}, w Writer) {
mapper(item, w, cancel)
}, source, collector, done.Done(), options.workers)
value, ok := <-output
if err := retErr.Load(); err != nil {
return nil, err
} else if ok {
return value, nil
} else {
return nil, ErrReduceNoOutput
}
}
```
```go
func executeMappers(mapper MapFunc, input <-chan interface{}, collector chan<- interface{},
done <-chan lang.PlaceholderType, workers int) {
var wg sync.WaitGroup
defer func() {
wg.Wait()
close(collector)
}()
pool := make(chan lang.PlaceholderType, workers)
writer := newGuardedWriter(collector, done)
for {
select {
case <-done:
return
case pool <- lang.Placeholder:
item, ok := <-input
if !ok {
<-pool
return
}
wg.Add(1)
threading.GoSafe(func() {
defer func() {
wg.Done()
<-pool
}()
mapper(item, writer)
})
}
}
}
```
### Usage examples
```go
package main
import (
"log"
"time"
"github.com/tal-tech/go-zero/core/mr"
"github.com/tal-tech/go-zero/core/timex"
)
type user struct{}
func (u *user) User(uid int64) (interface{}, error) {
time.Sleep(time.Millisecond * 30)
return nil, nil
}
type store struct{}
func (s *store) Store(pid int64) (interface{}, error) {
time.Sleep(time.Millisecond * 50)
return nil, nil
}
type order struct{}
func (o *order) Order(pid int64) (interface{}, error) {
time.Sleep(time.Millisecond * 40)
return nil, nil
}
var (
userRpc user
storeRpc store
orderRpc order
)
func main() {
start := timex.Now()
_, err := productDetail(123, 345)
if err != nil {
log.Printf("product detail error: %v", err)
return
}
log.Printf("productDetail time: %v", timex.Since(start))
// the data processing
res, err := checkLegal([]int64{1, 2, 3})
if err != nil {
log.Printf("check error: %v", err)
return
}
log.Printf("check res: %v", res)
}
type ProductDetail struct {
User interface{}
Store interface{}
Order interface{}
}
func productDetail(uid, pid int64) (*ProductDetail, error) {
var pd ProductDetail
err := mr.Finish(func() (err error) {
pd.User, err = userRpc.User(uid)
return
}, func() (err error) {
pd.Store, err = storeRpc.Store(pid)
return
}, func() (err error) {
pd.Order, err = orderRpc.Order(pid)
return
})
if err != nil {
return nil, err
}
return &pd, nil
}
func checkLegal(uids []int64) ([]int64, error) {
r, err := mr.MapReduce(func(source chan<- interface{}) {
for _, uid := range uids {
source <- uid
}
}, func(item interface{}, writer mr.Writer, cancel func(error)) {
uid := item.(int64)
ok, err := check(uid)
if err != nil {
cancel(err)
}
if ok {
writer.Write(uid)
}
}, func(pipe <-chan interface{}, writer mr.Writer, cancel func(error)) {
var uids []int64
for p := range pipe {
uids = append(uids, p.(int64))
}
writer.Write(uids)
})
if err != nil {
return nil, err
}
return r.([]int64), nil
}
func check(uid int64) (bool, error) {
// do something check user legal
time.Sleep(time.Millisecond * 20)
return true, nil
}
```
[More examples](https://github.com/zeromicro/zero-examples/tree/main/mapreduce)
---
sidebar_position: 2
---
# Community Problems
---
sidebar_position: 1
---
# Usage Problems
---
sidebar_position: 3
---
# Build API
### Create greet service
```shell
$ cd ~/go-zero-demo
$ go mod init go-zero-demo
$ goctl api new greet
Done.
```
Take a look at the structure of the `greet` service
```shell
$ cd greet
$ tree
```
```text
.
├── etc
│   └── greet-api.yaml
├── greet.api
├── greet.go
└── internal
├── config
│   └── config.go
├── handler
│   ├── greethandler.go
│   └── routes.go
├── logic
│   └── greetlogic.go
├── svc
│   └── servicecontext.go
└── types
└── types.go
```
As you can see from the above directory structure, the `greet` service is small, but it has all the "guts". Next we can write the business code in `greetlogic.go`.
### Writing logic
```shell
$ vim ~/go-zero-demo/greet/internal/logic/greetlogic.go
```
```go
func (l *GreetLogic) Greet(req types.Request) (*types.Response, error) {
return &types.Response{
Message: "Hello go-zero",
}, nil
}
```
### Start and access the service
* Start-up services
```shell
$ cd ~/go-zero-demo/greet
$ go run greet.go -f etc/greet-api.yaml
```
```text
Starting server at 0.0.0.0:8888...
```
* Access services
```shell
$ curl -i -X GET \
http://localhost:8888/from/you
```
```text
HTTP/1.1 200 OK
Content-Type: application/json
Date: Sun, 07 Feb 2021 04:31:25 GMT
Content-Length: 27
{"message":"Hello go-zero"}
```
### Source Code
[greet source code](https://github.com/zeromicro/go-zero-demo/tree/master/greet)
---
sidebar_position: 4
---
# Build RPC
### Create user rpc service
* Create user rpc service
```shell
$ cd ~/go-zero-demo/mall
$ mkdir -p user/rpc && cd user/rpc
```
* Add `user.proto` file, add `getUser` method
```shell
$ vim ~/go-zero-demo/mall/user/rpc/user.proto
```
```protobuf
syntax = "proto3";
package user;
//For protoc-gen-go versions greater than 1.4.0, the proto file must declare go_package, otherwise code generation fails
option go_package = "./user";
message IdRequest {
string id = 1;
}
message UserResponse {
// user id
string id = 1;
// user name
string name = 2;
// user gender
string gender = 3;
}
service User {
rpc getUser(IdRequest) returns(UserResponse);
}
```
* Generate code
```shell
$ cd ~/go-zero-demo/mall/user/rpc
$ goctl rpc template -o user.proto
$ goctl rpc proto -src user.proto -dir .
[goctl version <=1.2.1] protoc -I=/Users/xx/mall/user user.proto --goctl_out=plugins=grpc:/Users/xx/mall/user/user
[goctl version > 1.2.1] protoc -I=/Users/xx/mall/user user.proto --go_out=plugins=grpc:/Users/xx/mall/user/user
Done.
```
:::tip protoc-gen-go version
If the installed version of `protoc-gen-go` is greater than 1.4.0, it is recommended to add `go_package` to the proto file
:::
* Populate business logic
```shell
$ vim internal/logic/getuserlogic.go
```
```go
package logic
import (
"context"
"go-zero-demo/mall/user/rpc/internal/svc"
"go-zero-demo/mall/user/rpc/user"
"github.com/tal-tech/go-zero/core/logx"
)
type GetUserLogic struct {
ctx context.Context
svcCtx *svc.ServiceContext
logx.Logger
}
func NewGetUserLogic(ctx context.Context, svcCtx *svc.ServiceContext) *GetUserLogic {
return &GetUserLogic{
ctx: ctx,
svcCtx: svcCtx,
Logger: logx.WithContext(ctx),
}
}
func (l *GetUserLogic) GetUser(in *user.IdRequest) (*user.UserResponse, error) {
return &user.UserResponse{
Id: "1",
Name: "test",
}, nil
}
```
* Modify the configuration
```shell
$ vim internal/config/config.go
```
```go
package config
import (
"github.com/tal-tech/go-zero/zrpc"
)
type Config struct {
zrpc.RpcServerConf
}
```
* Add yaml configuration
```shell
$ vim etc/user.yaml
```
```yaml
Name: user.rpc
ListenOn: 127.0.0.1:8080
Etcd:
Hosts:
- 127.0.0.1:2379
Key: user.rpc
```
* Modify the directory file
```shell
$ cd ~/go-zero-demo/mall/user/rpc
$ mkdir userclient && mv ./user/user.go ./userclient
```
### Start the service and verify
:::tip etcd installation
[Click here for etcd installation tutorial](https://etcd.io/docs/v3.5/install/)
:::
* Start etcd
```shell
$ etcd
```
* Start user rpc
```shell
$ go run user.go -f etc/user.yaml
```
```text
Starting rpc server at 127.0.0.1:8080...
```
---
sidebar_position: 2
---
# Build Tool
goctl is pronounced go control, not go C-T-L. goctl means not to be controlled by the code, but to control it. The go does not refer to golang. I designed goctl with the hope that it would free our hands 👈
### [see goctl for details](../build-tool/tool-intro.md)
---
sidebar_position: 1
---
# Concept
### go-zero
A variety of engineering practices in one web and rpc framework.
### goctl
An aid designed to improve engineering efficiency and reduce error rates for developers.
### goctl plugin
Peripheral binaries built around goctl that meet customized code-generation needs, such as the route-merging plugin `goctl-go-compact`, the `goctl-swagger` plugin for generating swagger documents, and the `goctl-php` plugin for generating php client code.
### intellij/vscode plugin
A plugin developed with goctl on the intellij product line, which replaces goctl command line operations with UI.
### api file
The api file is a text file used to define and describe the api service, which ends with the .api suffix and contains the api syntax description content.
### goctl environment
The goctl environment is the preparation environment before using goctl and contains:
* golang environment
* protoc
* protoc-gen-go plugin
* go module | gopath
### go-zero-demo
go-zero-demo is a large repository holding all the source code in this documentation; we create sub-projects under it as we write the demos.
So we need to create the big repository `go-zero-demo` in advance. I put this repository in the home directory.
```shell
$ cd ~
$ mkdir go-zero-demo && cd go-zero-demo
$ go mod init go-zero-demo
```
---
title: Markdown page example
---
# Markdown page example
You don't need React to write simple standalone pages.
{
"link.title.Docs": {
"message": "Docs",
"description": "The title of the footer links column with title=Docs in the footer"
},
"link.title.Community": {
"message": "Community",
"description": "The title of the footer links column with title=Community in the footer"
},
"link.title.More": {
"message": "More",
"description": "The title of the footer links column with title=More in the footer"
},
"link.item.label.Tutorial": {
"message": "Tutorial",
"description": "The label of footer link with label=Tutorial linking to /docs/intro"
},
"link.item.label.Stack Overflow": {
"message": "Stack Overflow",
"description": "The label of footer link with label=Stack Overflow linking to https://stackoverflow.com/questions/tagged/docusaurus"
},
"link.item.label.Discord": {
"message": "Discord",
"description": "The label of footer link with label=Discord linking to https://discordapp.com/invite/docusaurus"
},
"link.item.label.Twitter": {
"message": "Twitter",
"description": "The label of footer link with label=Twitter linking to https://twitter.com/docusaurus"
},
"link.item.label.Blog": {
"message": "Blog",
"description": "The label of footer link with label=Blog linking to /blog"
},
"link.item.label.GitHub": {
"message": "GitHub",
"description": "The label of footer link with label=GitHub linking to https://github.com/facebook/docusaurus"
},
"copyright": {
"message": "Copyright © 2022 go-zero.dev, Inc. Built with Docusaurus.",
"description": "The footer copyright"
}
}
{
"title": {
"message": "Go-zero",
"description": "The title in the navbar"
},
"item.label.文档": {
"message": "Docs",
"description": "Navbar item with label 文档"
},
"item.label.博客": {
"message": "Blog",
"description": "Navbar item with label 博客"
},
"item.label.GitHub": {
"message": "GitHub",
"description": "Navbar item with label GitHub"
}
}
`website/package.json`:
{
"name": "tmp",
"version": "0.0.0",
"private": true,
"scripts": {
"docusaurus": "docusaurus",
"start": "docusaurus start",
"build": "docusaurus build",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",
"clear": "docusaurus clear",
"serve": "docusaurus serve",
"write-translations": "docusaurus write-translations",
"write-heading-ids": "docusaurus write-heading-ids"
},
"dependencies": {
"@docusaurus/core": "2.0.0-beta.14",
"@docusaurus/preset-classic": "2.0.0-beta.14",
"@mdx-js/react": "^1.6.21",
"clsx": "^1.1.1",
"prism-react-renderer": "^1.2.1",
"react": "^17.0.1",
"react-dom": "^17.0.1"
},
"browserslist": {
"production": [
">0.5%",
"not dead",
"not op_mini all"
],
"development": [
"last 1 chrome version",
"last 1 firefox version",
"last 1 safari version"
]
}
}
`website/sidebars.js`:
/**
* Creating a sidebar enables you to:
- create an ordered group of docs
- render a sidebar for each doc of that group
- provide next/previous navigation
The sidebars can be generated from the filesystem, or explicitly defined here.
Create as many sidebars as you want.
*/
// @ts-check
/** @type {import('@docusaurus/plugin-content-docs').SidebarsConfig} */
const sidebars = {
// By default, Docusaurus generates a sidebar from the docs folder structure
tutorialSidebar: [{type: 'autogenerated', dirName: '.'}],
// But you can create a sidebar manually
/*
tutorialSidebar: [
{
type: 'category',
label: 'Tutorial',
items: ['hello'],
},
],
*/
};
module.exports = sidebars;
import React from 'react';
import clsx from 'clsx';
import styles from './HomepageFeatures.module.css';
import Translate, { translate } from '@docusaurus/Translate';
const FeatureList = [
{
title: <><Translate>稳定性</Translate></>,
Svg: require('../../static/img/stabilize.svg').default,
description: (
<>
<Translate>轻松获得支撑千万日活服务的稳定性</Translate>
</>
),
},
{
title: <><Translate>服务治理</Translate></>,
Svg: require('../../static/img/govern.svg').default,
description: (
<>
<Translate>内建级联超时控制限流自适应熔断自适应降载等微服务治理能力无需配置和额外代码</Translate>
</>
),
},
{
title: <><Translate>可插拔</Translate></>,
Svg: require('../../static/img/move.svg').default,
description: (
<>
<Translate>微服务治理中间件可无缝集成到其它现有框架使用</Translate>
</>
),
},
{
title: <><Translate>代码自动生成</Translate></>,
Svg: require('../../static/img/code-gen.svg').default,
description: (
<>
<Translate>极简的 API 描述一键生成各端代码</Translate>
</>
),
},
{
title: <><Translate>效验请求合法性</Translate></>,
Svg: require('../../static/img/validate.svg').default,
description: (
<>
<Translate>自动校验客户端请求参数合法性</Translate>
</>
),
},
{
title: <><Translate>工具包</Translate></>,
Svg: require('../../static/img/tool.svg').default,
description: (
<>
<Translate>大量微服务治理和并发工具包</Translate>
</>
),
},
];
function Feature({Svg, title, description}) {
return (
<div className={clsx('col col--4')}>
<div className="text--center">
<Svg className={styles.featureSvg} alt={title} />
</div>
<div className="text--center padding-horiz--md">
<h3>{title}</h3>
<p>{description}</p>
</div>
</div>
);
}
export default function HomepageFeatures() {
return (
<section className={styles.features}>
<div className="container">
<div className="row">
{FeatureList.map((props, idx) => (
<Feature key={idx} {...props} />
))}
</div>
</div>
</section>
);
}
.features {
display: flex;
align-items: center;
padding: 2rem 0;
width: 100%;
}
.featureSvg {
height: 200px;
width: 200px;
}
/**
* Any CSS included here will be global. The classic template
* bundles Infima by default. Infima is a CSS framework designed to
* work well for content-centric websites.
*/
/* You can override the default Infima variables here. */
:root {
--ifm-color-primary: #25c2a0;
--ifm-color-primary-dark: rgb(33, 175, 144);
--ifm-color-primary-darker: rgb(31, 165, 136);
--ifm-color-primary-darkest: rgb(26, 136, 112);
--ifm-color-primary-light: rgb(70, 203, 174);
--ifm-color-primary-lighter: rgb(102, 212, 189);
--ifm-color-primary-lightest: rgb(146, 224, 208);
--ifm-code-font-size: 95%;
}
.docusaurus-highlight-code-line {
background-color: rgba(0, 0, 0, 0.1);
display: block;
margin: 0 calc(-1 * var(--ifm-pre-padding));
padding: 0 var(--ifm-pre-padding);
}
html[data-theme='dark'] .docusaurus-highlight-code-line {
background-color: rgba(0, 0, 0, 0.3);
}
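
For context, the `docusaurus-highlight-code-line` class above is never referenced by the site's own markup: Docusaurus attaches it to individual lines of fenced code blocks that opt into highlighting. A minimal sketch of how a doc page would trigger it, using the standard Docusaurus line-range metastring (the snippet content is illustrative):

````md
```js {2}
function hello() {
  return 'this line is wrapped in .docusaurus-highlight-code-line';
}
```
````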

View File

@ -0,0 +1,52 @@
import React from 'react';
import clsx from 'clsx';
import Translate from '@docusaurus/Translate';
import Layout from '@theme/Layout';
import Link from '@docusaurus/Link';
import useDocusaurusContext from '@docusaurus/useDocusaurusContext';
import styles from './index.module.css';
import HomepageFeatures from '../components/HomepageFeatures';
function HomepageHeader() {
const {siteConfig} = useDocusaurusContext();
return (
<header className={clsx('hero hero--primary', styles.heroBanner)}>
<div className="container">
<h1 className="hero__title">{siteConfig.title}</h1>
{/* <Translate> only accepts a static string child, so render the tagline directly. */}
<p className="hero__subtitle">{siteConfig.tagline}</p>
<div>
<span className={styles.indexCtasGitHubButtonWrapper}>
<iframe
className={styles.indexCtasGitHubButton}
src="https://ghbtns.com/github-btn.html?user=zeromicro&amp;repo=go-zero&amp;type=star&amp;count=true&amp;size=large"
width={160}
height={30}
title="GitHub Stars"
/>
</span>
</div>
<div className={styles.buttons}>
<Link
className="button button--secondary button--lg"
to="/docs/quick-start/concept">
<Translate>开始体验吧</Translate>
</Link>
</div>
</div>
</header>
);
}
export default function Home() {
const {siteConfig} = useDocusaurusContext();
return (
<Layout
title={siteConfig.title}
description={siteConfig.tagline}>
<HomepageHeader />
<main>
<HomepageFeatures />
</main>
</Layout>
);
}

View File

@ -0,0 +1,23 @@
/**
* CSS files with the .module.css suffix will be treated as CSS modules
* and scoped locally.
*/
.heroBanner {
padding: 4rem 0;
text-align: center;
position: relative;
overflow: hidden;
}
@media screen and (max-width: 966px) {
.heroBanner {
padding: 2rem;
}
}
.buttons {
display: flex;
align-items: center;
justify-content: center;
}

View File

@ -0,0 +1,7 @@
---
title: Markdown page example
---
# Markdown page example
You don't need React to write simple standalone pages.
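
For comparison, the same standalone page written as a React component needs noticeably more ceremony — a hedged sketch using Docusaurus's `Layout` wrapper (the component name is illustrative, mirroring the imports used in `index.js` above):

```jsx
import React from 'react';
import Layout from '@theme/Layout';

// React equivalent of the Markdown page above: same output, more boilerplate.
export default function MarkdownPageExample() {
  return (
    <Layout title="Markdown page example">
      <main className="container margin-vert--lg">
        <h1>Markdown page example</h1>
        <p>You don't need React to write simple standalone pages.</p>
      </main>
    </Layout>
  );
}
```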

0
website/static/.nojekyll Normal file
View File

Binary file not shown.

After

Width:  |  Height:  |  Size: 122 KiB


Some files were not shown because too many files have changed in this diff