mirror of https://github.com/nginxinc/nginx-prometheus-exporter.git synced 2025-07-31 21:04:21 +03:00

Release 0.1.0

Author: Michael Pleshakov
Date:   2018-05-30 14:49:53 +01:00
Commit: 8d90a86cdf
448 changed files with 95398 additions and 0 deletions

.gitignore (vendored, new file)

@@ -0,0 +1,9 @@
# NGINX Plus license files
*.crt
*.key
# Visual Studio Code settings
.vscode
# the binary
nginx-prometheus-exporter

CHANGELOG.md (new file)

@@ -0,0 +1,5 @@
# Changelog
## 0.1.0
* Initial release.

Dockerfile (new file)

@@ -0,0 +1,13 @@
FROM golang:1.10 as builder
ARG VERSION
ARG GIT_COMMIT
WORKDIR /go/src/github.com/nginxinc/nginx-prometheus-exporter
COPY *.go ./
COPY vendor ./vendor
COPY collector ./collector
COPY client ./client
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags "-X main.version=${VERSION} -X main.gitCommit=${GIT_COMMIT}" -o exporter .
FROM alpine:latest
COPY --from=builder /go/src/github.com/nginxinc/nginx-prometheus-exporter/exporter /usr/bin/
ENTRYPOINT [ "/usr/bin/exporter" ]

LICENSE (new file)

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "{}"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 Nginx, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

Makefile (new file)

@@ -0,0 +1,13 @@
VERSION = 0.1.0
PREFIX = nginx-prometheus-exporter
TAG = $(VERSION)
GIT_COMMIT = $(shell git rev-parse --short HEAD)
test:
	go test ./...

container:
	docker build --build-arg VERSION=$(VERSION) --build-arg GIT_COMMIT=$(GIT_COMMIT) -t $(PREFIX):$(TAG) .

push: container
	docker push $(PREFIX):$(TAG)

README.md (new file)

@@ -0,0 +1,103 @@
# NGINX Prometheus Exporter
NGINX Prometheus exporter makes it possible to monitor NGINX or NGINX Plus using Prometheus.
## Overview
[NGINX](http://nginx.org) exposes a handful of metrics via the [stub_status page](http://nginx.org/en/docs/http/ngx_http_stub_status_module.html#stub_status). [NGINX Plus](https://www.nginx.com/products/nginx/) provides a richer set of metrics via the [API](https://nginx.org/en/docs/http/ngx_http_api_module.html) and the [monitoring dashboard](https://www.nginx.com/products/nginx/live-activity-monitoring/). NGINX Prometheus Exporter fetches the metrics from a single NGINX or NGINX Plus instance, converts them into the appropriate Prometheus metric types, and exposes them via an HTTP server to be collected by [Prometheus](https://prometheus.io/).
**Note**: Currently, running NGINX Prometheus Exporter is supported only in a Docker container.
## Getting Started
In this section, we show how to quickly run NGINX Prometheus Exporter in a container for NGINX or NGINX Plus.
### A Note about NGINX Ingress Controller
If you'd like to use the NGINX Prometheus Exporter with [NGINX Ingress Controller](https://github.com/nginxinc/kubernetes-ingress/) for Kubernetes, see [this doc](https://github.com/nginxinc/kubernetes-ingress/blob/master/docs/installation.md#5-access-the-live-activity-monitoring-dashboard) for the installation instructions.
### Prerequisites
We assume that you have already installed Prometheus and NGINX or NGINX Plus. Additionally, you need to:
* Expose the built-in metrics in NGINX/NGINX Plus:
* For NGINX, expose the [stub_status page](http://nginx.org/en/docs/http/ngx_http_stub_status_module.html#stub_status) at `/stub_status` on port `8080`.
* For NGINX Plus, expose the [API](https://nginx.org/en/docs/http/ngx_http_api_module.html#api) at `/api` on port `8080`.
* Install Docker on the server where you're planning to run the exporter.
* Configure Prometheus to scrape metrics from the server with the exporter. Note that the exporter's default scrape port is `9113` and its default metrics path is `/metrics`. A minimal example scrape configuration is shown after this list.
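As an illustration only, a minimal Prometheus scrape configuration could look like the sketch below; the job name and the `<exporter-host>` placeholder are assumptions, while the port and path are the exporter's defaults:
```
scrape_configs:
  - job_name: 'nginx-prometheus-exporter'  # hypothetical job name
    metrics_path: /metrics                 # the exporter's default metrics path
    static_configs:
      - targets: ['<exporter-host>:9113']  # host running the exporter; 9113 is the default port
```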
### Running the Exporter in a Container
To start the exporter we use the [docker run](https://docs.docker.com/engine/reference/run/) command.
* To export NGINX metrics, run:
```
$ docker run -p 9113:9113 nginx/nginx-prometheus-exporter:0.1.0 -nginx.scrape-uri http://<nginx>:8080/stub_status
```
where `<nginx>` is the IP address or DNS name through which NGINX is available.
* To export NGINX Plus metrics, run:
```
$ docker run -p 9113:9113 nginx/nginx-prometheus-exporter:0.1.0 -nginx.plus -nginx.scrape-uri http://<nginx-plus>:8080/api
```
where `<nginx-plus>` is the IP address or DNS name through which NGINX Plus is available.
## Usage
### Command-line Arguments
```
Usage of ./nginx-prometheus-exporter:
-nginx.plus
Start the exporter for NGINX Plus. By default, the exporter is started for NGINX.
-nginx.scrape-uri string
A URI for scraping NGINX or NGINX Plus metrics.
For NGINX, the stub_status page must be available through the URI. For NGINX Plus -- the API. (default "http://127.0.0.1:8080/stub_status")
-web.listen-address string
An address to listen on for web interface and telemetry. (default ":9113")
-web.telemetry-path string
A path under which to expose metrics. (default "/metrics")
```
### Exported Metrics
* For NGINX, all stub_status metrics are exported. Connect to the `/metrics` page of the running exporter to see the complete list of metrics along with their descriptions.
* For NGINX Plus, the following metrics are exported:
* [Connections](http://nginx.org/en/docs/http/ngx_http_api_module.html#def_nginx_connections).
* [HTTP](http://nginx.org/en/docs/http/ngx_http_api_module.html#http_).
* [SSL](http://nginx.org/en/docs/http/ngx_http_api_module.html#def_nginx_ssl_object).
* [HTTP Server Zones](http://nginx.org/en/docs/http/ngx_http_api_module.html#def_nginx_http_server_zone).
* [HTTP Upstreams](http://nginx.org/en/docs/http/ngx_http_api_module.html#def_nginx_http_upstream). Note: for the `state` metric, the string values are converted to float64 using the following rule: `"up"` -> `1.0`, `"draining"` -> `2.0`, `"down"` -> `3.0`, `"unavail"` -> `4.0`, `"checking"` -> `5.0`, `"unhealthy"` -> `6.0`.
Connect to the `/metrics` page of the running exporter to see the complete list of metrics along with their descriptions. Note: to see server zone related metrics, you must configure [status zones](https://nginx.org/en/docs/http/ngx_http_status_module.html#status_zone); to see upstream related metrics, you must configure upstreams with a [shared memory zone](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#zone).
### Troubleshooting
The exporter logs errors to the standard output. If the exporter doesn't work as expected, check its logs using the [docker logs](https://docs.docker.com/engine/reference/commandline/logs/) command.
## Releases
For each release, we publish the corresponding Docker image at `nginx/nginx-prometheus-exporter` [DockerHub repo](https://hub.docker.com/r/nginx/nginx-prometheus-exporter/). See the GitHub [releases page](https://github.com/nginxinc/nginx-prometheus-exporter/releases) for the list of all releases and the [CHANGELOG.md](CHANGELOG.md) for the changelog.
## Building the Container Image
You can build the exporter image using the provided Makefile. Before building the image, make sure the following software is installed on your machine:
* make
* Docker
* git
To build the image, run:
```
$ make container
```
Note:
* golang is not required, as the exporter binary is built in a Docker container. See the [Dockerfile](Dockerfile).
* The Makefile supports a few variables which can override its default behaviour. See [its content](Makefile).
## Support
Commercial support is available for NGINX Plus customers when the NGINX Prometheus Exporter is used with NGINX Ingress Controller.
## License
[Apache License, Version 2.0](LICENSE).

client/nginx.go (new file)

@@ -0,0 +1,139 @@
package client
import (
"fmt"
"io/ioutil"
"net/http"
"strconv"
"strings"
)
// NginxClient allows you to fetch NGINX metrics from the stub_status page.
type NginxClient struct {
apiEndpoint string
httpClient *http.Client
}
// StubStats represents NGINX stub_status metrics.
type StubStats struct {
Connections StubConnections
Requests int64
}
// StubConnections represents connections related metrics.
type StubConnections struct {
Active int64
Accepted int64
Handled int64
Reading int64
Writing int64
Waiting int64
}
// NewNginxClient creates an NginxClient.
func NewNginxClient(httpClient *http.Client, apiEndpoint string) (*NginxClient, error) {
client := &NginxClient{
apiEndpoint: apiEndpoint,
httpClient: httpClient,
}
if _, err := client.GetStubStats(); err != nil {
return nil, fmt.Errorf("failed to create NginxClient: %v", err)
}
return client, nil
}
// GetStubStats fetches the stub_status metrics.
func (client *NginxClient) GetStubStats() (*StubStats, error) {
resp, err := client.httpClient.Get(client.apiEndpoint)
if err != nil {
return nil, fmt.Errorf("failed to get %v: %v", client.apiEndpoint, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("expected %v response, got %v", http.StatusOK, resp.StatusCode)
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("failed to read the response body: %v", err)
}
var stats StubStats
err = parseStubStats(body, &stats)
if err != nil {
return nil, fmt.Errorf("failed to parse response body %q: %v", string(body), err)
}
return &stats, nil
}
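// parseStubStats parses the plain-text body of the stub_status page. The expected format
// (the numbers below are illustrative) is:
//
//	Active connections: 291
//	server accepts handled requests
//	 16630948 16630948 31070465
//	Reading: 6 Writing: 179 Waiting: 106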
func parseStubStats(data []byte, stats *StubStats) error {
dataStr := string(data)
parts := strings.Split(dataStr, "\n")
if len(parts) != 5 {
return fmt.Errorf("invalid input %q", dataStr)
}
activeConsParts := strings.Split(strings.TrimSpace(parts[0]), " ")
if len(activeConsParts) != 3 {
return fmt.Errorf("invalid input for active connections %q", parts[0])
}
actCons, err := strconv.ParseInt(activeConsParts[2], 10, 64)
if err != nil {
return fmt.Errorf("invalid input for active connections %q: %v", activeConsParts[2], err)
}
stats.Connections.Active = actCons
miscParts := strings.Split(strings.TrimSpace(parts[2]), " ")
if len(miscParts) != 3 {
return fmt.Errorf("invalid input for connections and requests %q", parts[2])
}
acceptedCons, err := strconv.ParseInt(miscParts[0], 10, 64)
if err != nil {
return fmt.Errorf("invalid input for accepted connections %q: %v", miscParts[0], err)
}
stats.Connections.Accepted = acceptedCons
handledCons, err := strconv.ParseInt(miscParts[1], 10, 64)
if err != nil {
return fmt.Errorf("invalid input for handled connections %q: %v", miscParts[1], err)
}
stats.Connections.Handled = handledCons
requests, err := strconv.ParseInt(miscParts[2], 10, 64)
if err != nil {
return fmt.Errorf("invalid input for requests %q: %v", miscParts[2], err)
}
stats.Requests = requests
consParts := strings.Split(strings.TrimSpace(parts[3]), " ")
if len(consParts) != 6 {
return fmt.Errorf("invalid input for connections %q", parts[3])
}
readingCons, err := strconv.ParseInt(consParts[1], 10, 64)
if err != nil {
return fmt.Errorf("invalid input for reading connections %q: %v", consParts[1], err)
}
stats.Connections.Reading = readingCons
writingCons, err := strconv.ParseInt(consParts[3], 10, 64)
if err != nil {
return fmt.Errorf("invalid input for writing connections %q: %v", consParts[3], err)
}
stats.Connections.Writing = writingCons
waitingCons, err := strconv.ParseInt(consParts[5], 10, 64)
if err != nil {
return fmt.Errorf("invalid input for waiting connections %q: %v", consParts[5], err)
}
stats.Connections.Waiting = waitingCons
return nil
}

client/nginx_plus.go (new file)

@@ -0,0 +1,657 @@
package client
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
)
// Client version - 380a939
// APIVersion is a version of NGINX Plus API.
const APIVersion = 2
// NginxPlusClient lets you add/remove servers to/from NGINX Plus via its API.
type NginxPlusClient struct {
apiEndpoint string
httpClient *http.Client
}
type versions []int
// UpstreamServer lets you configure HTTP upstreams.
type UpstreamServer struct {
ID int `json:"id,omitempty"`
Server string `json:"server"`
MaxFails int `json:"max_fails"`
FailTimeout string `json:"fail_timeout,omitempty"`
SlowStart string `json:"slow_start,omitempty"`
}
// StreamUpstreamServer lets you configure Stream upstreams.
type StreamUpstreamServer struct {
ID int64 `json:"id,omitempty"`
Server string `json:"server"`
MaxFails int64 `json:"max_fails"`
FailTimeout string `json:"fail_timeout,omitempty"`
SlowStart string `json:"slow_start,omitempty"`
}
type apiErrorResponse struct {
Path string
Method string
Error apiError
RequestID string `json:"request_id"`
Href string
}
func (resp *apiErrorResponse) toString() string {
return fmt.Sprintf("path=%v; method=%v; error.status=%v; error.text=%v; error.code=%v; request_id=%v; href=%v",
resp.Path, resp.Method, resp.Error.Status, resp.Error.Text, resp.Error.Code, resp.RequestID, resp.Href)
}
type apiError struct {
Status int
Text string
Code string
}
// Stats represents NGINX Plus stats fetched from the NGINX Plus API.
// https://nginx.org/en/docs/http/ngx_http_api_module.html
type Stats struct {
Connections Connections
HTTPRequests HTTPRequests
SSL SSL
ServerZones ServerZones
Upstreams Upstreams
}
// Connections represents connection related stats.
type Connections struct {
Accepted uint64
Dropped uint64
Active uint64
Idle uint64
}
// HTTPRequests represents HTTP request related stats.
type HTTPRequests struct {
Total uint64
Current uint64
}
// SSL represents SSL related stats.
type SSL struct {
Handshakes uint64
HandshakesFailed uint64 `json:"handshakes_failed"`
SessionReuses uint64 `json:"session_reuses"`
}
// ServerZones is a map of server zone stats by zone name.
type ServerZones map[string]ServerZone
// ServerZone represents server zone related stats.
type ServerZone struct {
Processing uint64
Requests uint64
Responses Responses
Discarded uint64
Received uint64
Sent uint64
}
// Responses represents HTTP response related stats.
type Responses struct {
Responses1xx uint64 `json:"1xx"`
Responses2xx uint64 `json:"2xx"`
Responses3xx uint64 `json:"3xx"`
Responses4xx uint64 `json:"4xx"`
Responses5xx uint64 `json:"5xx"`
}
// Upstreams is a map of upstream stats by upstream name.
type Upstreams map[string]Upstream
// Upstream represents upstream related stats.
type Upstream struct {
Peers []Peer
Keepalives int
Zombies int
Zone string
Queue Queue
}
// Queue represents queue related stats for an upstream.
type Queue struct {
Size int
MaxSize int `json:"max_size"`
Overflows uint64
}
// Peer represents peer (upstream server) related stats.
type Peer struct {
ID int
Server string
Service string
Name string
Backup bool
Weight int
State string
Active uint64
MaxConns int `json:"max_conns"`
Requests uint64
Responses Responses
Sent uint64
Received uint64
Fails uint64
Unavail uint64
HealthChecks HealthChecks `json:"health_checks"`
Downtime uint64
Downstart string
Selected string
HeaderTime uint64 `json:"header_time"`
ResponseTime uint64 `json:"response_time"`
}
// HealthChecks represents health check related stats for a peer.
type HealthChecks struct {
Checks uint64
Fails uint64
Unhealthy uint64
LastPassed bool `json:"last_passed"`
}
// NewNginxPlusClient creates an NginxPlusClient.
func NewNginxPlusClient(httpClient *http.Client, apiEndpoint string) (*NginxPlusClient, error) {
versions, err := getAPIVersions(httpClient, apiEndpoint)
if err != nil {
return nil, fmt.Errorf("error accessing the API: %v", err)
}
found := false
for _, v := range *versions {
if v == APIVersion {
found = true
break
}
}
if !found {
return nil, fmt.Errorf("API version %v of the client is not supported by API versions of NGINX Plus: %v", APIVersion, *versions)
}
return &NginxPlusClient{
apiEndpoint: apiEndpoint,
httpClient: httpClient,
}, nil
}
func getAPIVersions(httpClient *http.Client, endpoint string) (*versions, error) {
resp, err := httpClient.Get(endpoint)
if err != nil {
return nil, fmt.Errorf("%v is not accessible: %v", endpoint, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("%v is not accessible: expected %v response, got %v", endpoint, http.StatusOK, resp.StatusCode)
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, fmt.Errorf("error while reading body of the response: %v", err)
}
var vers versions
err = json.Unmarshal(body, &vers)
if err != nil {
return nil, fmt.Errorf("error unmarshalling versions, got %q response: %v", string(body), err)
}
return &vers, nil
}
func createResponseMismatchError(respBody io.ReadCloser, mainErr error) error {
apiErr, err := readAPIErrorResponse(respBody)
if err != nil {
return fmt.Errorf("%v; failed to read the response body: %v", mainErr, err)
}
return fmt.Errorf("%v; error: %v", mainErr, apiErr.toString())
}
func readAPIErrorResponse(respBody io.ReadCloser) (*apiErrorResponse, error) {
body, err := ioutil.ReadAll(respBody)
if err != nil {
return nil, fmt.Errorf("failed to read the response body: %v", err)
}
var apiErr apiErrorResponse
err = json.Unmarshal(body, &apiErr)
if err != nil {
return nil, fmt.Errorf("error unmarshalling apiErrorResponse: got %q response: %v", string(body), err)
}
return &apiErr, nil
}
// CheckIfUpstreamExists checks if the upstream exists in NGINX. If the upstream doesn't exist, it returns an error.
func (client *NginxPlusClient) CheckIfUpstreamExists(upstream string) error {
_, err := client.GetHTTPServers(upstream)
return err
}
// GetHTTPServers returns the servers of the upstream from NGINX.
func (client *NginxPlusClient) GetHTTPServers(upstream string) ([]UpstreamServer, error) {
path := fmt.Sprintf("http/upstreams/%v/servers", upstream)
var servers []UpstreamServer
err := client.get(path, &servers)
if err != nil {
return nil, fmt.Errorf("failed to get the HTTP servers of upstream %v: %v", upstream, err)
}
return servers, nil
}
// AddHTTPServer adds the server to the upstream.
func (client *NginxPlusClient) AddHTTPServer(upstream string, server UpstreamServer) error {
id, err := client.getIDOfHTTPServer(upstream, server.Server)
if err != nil {
return fmt.Errorf("failed to add %v server to %v upstream: %v", server.Server, upstream, err)
}
if id != -1 {
return fmt.Errorf("failed to add %v server to %v upstream: server already exists", server.Server, upstream)
}
path := fmt.Sprintf("http/upstreams/%v/servers/", upstream)
err = client.post(path, &server)
if err != nil {
return fmt.Errorf("failed to add %v server to %v upstream: %v", server.Server, upstream, err)
}
return nil
}
// DeleteHTTPServer removes the server from the upstream.
func (client *NginxPlusClient) DeleteHTTPServer(upstream string, server string) error {
id, err := client.getIDOfHTTPServer(upstream, server)
if err != nil {
return fmt.Errorf("failed to remove %v server from %v upstream: %v", server, upstream, err)
}
if id == -1 {
return fmt.Errorf("failed to remove %v server from %v upstream: server doesn't exist", server, upstream)
}
path := fmt.Sprintf("http/upstreams/%v/servers/%v", upstream, id)
err = client.delete(path)
if err != nil {
return fmt.Errorf("failed to remove %v server from %v upstream: %v", server, upstream, err)
}
return nil
}
// UpdateHTTPServers updates the servers of the upstream.
// Servers that are in the slice, but don't exist in NGINX will be added to NGINX.
// Servers that aren't in the slice, but exist in NGINX, will be removed from NGINX.
func (client *NginxPlusClient) UpdateHTTPServers(upstream string, servers []UpstreamServer) ([]UpstreamServer, []UpstreamServer, error) {
serversInNginx, err := client.GetHTTPServers(upstream)
if err != nil {
return nil, nil, fmt.Errorf("failed to update servers of %v upstream: %v", upstream, err)
}
toAdd, toDelete := determineUpdates(servers, serversInNginx)
for _, server := range toAdd {
err := client.AddHTTPServer(upstream, server)
if err != nil {
return nil, nil, fmt.Errorf("failed to update servers of %v upstream: %v", upstream, err)
}
}
for _, server := range toDelete {
err := client.DeleteHTTPServer(upstream, server.Server)
if err != nil {
return nil, nil, fmt.Errorf("failed to update servers of %v upstream: %v", upstream, err)
}
}
return toAdd, toDelete, nil
}
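// determineUpdates compares the desired servers with the servers currently configured in NGINX:
// servers present only in updatedServers are returned in toAdd, and servers present only in
// nginxServers are returned in toRemove.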
func determineUpdates(updatedServers []UpstreamServer, nginxServers []UpstreamServer) (toAdd []UpstreamServer, toRemove []UpstreamServer) {
for _, server := range updatedServers {
found := false
for _, serverNGX := range nginxServers {
if server.Server == serverNGX.Server {
found = true
break
}
}
if !found {
toAdd = append(toAdd, server)
}
}
for _, serverNGX := range nginxServers {
found := false
for _, server := range updatedServers {
if serverNGX.Server == server.Server {
found = true
break
}
}
if !found {
toRemove = append(toRemove, serverNGX)
}
}
return
}
func (client *NginxPlusClient) getIDOfHTTPServer(upstream string, name string) (int, error) {
servers, err := client.GetHTTPServers(upstream)
if err != nil {
return -1, fmt.Errorf("error getting id of server %v of upstream %v: %v", name, upstream, err)
}
for _, s := range servers {
if s.Server == name {
return s.ID, nil
}
}
return -1, nil
}
func (client *NginxPlusClient) get(path string, data interface{}) error {
url := fmt.Sprintf("%v/%v/%v", client.apiEndpoint, APIVersion, path)
resp, err := client.httpClient.Get(url)
if err != nil {
return fmt.Errorf("failed to get %v: %v", path, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
mainErr := fmt.Errorf("expected %v response, got %v", http.StatusOK, resp.StatusCode)
return createResponseMismatchError(resp.Body, mainErr)
}
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return fmt.Errorf("failed to read the response body: %v", err)
}
err = json.Unmarshal(body, data)
if err != nil {
return fmt.Errorf("error unmarshaling response %q: %v", string(body), err)
}
return nil
}
func (client *NginxPlusClient) post(path string, input interface{}) error {
url := fmt.Sprintf("%v/%v/%v", client.apiEndpoint, APIVersion, path)
jsonInput, err := json.Marshal(input)
if err != nil {
return fmt.Errorf("failed to marshal input: %v", err)
}
resp, err := client.httpClient.Post(url, "application/json", bytes.NewBuffer(jsonInput))
if err != nil {
return fmt.Errorf("failed to post %v: %v", path, err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusCreated {
mainErr := fmt.Errorf("expected %v response, got %v", http.StatusCreated, resp.StatusCode)
return createResponseMismatchError(resp.Body, mainErr)
}
return nil
}
func (client *NginxPlusClient) delete(path string) error {
path = fmt.Sprintf("%v/%v/%v/", client.apiEndpoint, APIVersion, path)
req, err := http.NewRequest(http.MethodDelete, path, nil)
if err != nil {
return fmt.Errorf("failed to create a delete request: %v", err)
}
resp, err := client.httpClient.Do(req)
if err != nil {
return fmt.Errorf("failed to perform the delete request: %v", err)
}
defer resp.Body.Close()
if resp.StatusCode != http.StatusOK {
mainErr := fmt.Errorf("failed to complete delete request: expected %v response, got %v",
http.StatusOK, resp.StatusCode)
return createResponseMismatchError(resp.Body, mainErr)
}
return nil
}
// CheckIfStreamUpstreamExists checks if the stream upstream exists in NGINX. If the upstream doesn't exist, it returns an error.
func (client *NginxPlusClient) CheckIfStreamUpstreamExists(upstream string) error {
_, err := client.GetStreamServers(upstream)
return err
}
// GetStreamServers returns the stream servers of the upstream from NGINX.
func (client *NginxPlusClient) GetStreamServers(upstream string) ([]StreamUpstreamServer, error) {
path := fmt.Sprintf("stream/upstreams/%v/servers", upstream)
var servers []StreamUpstreamServer
err := client.get(path, &servers)
if err != nil {
return nil, fmt.Errorf("failed to get stream servers of upstream %v: %v", upstream, err)
}
return servers, nil
}
// AddStreamServer adds the server to the upstream.
func (client *NginxPlusClient) AddStreamServer(upstream string, server StreamUpstreamServer) error {
id, err := client.getIDOfStreamServer(upstream, server.Server)
if err != nil {
return fmt.Errorf("failed to add %v stream server to %v upstream: %v", server.Server, upstream, err)
}
if id != -1 {
return fmt.Errorf("failed to add %v stream server to %v upstream: server already exists", server.Server, upstream)
}
path := fmt.Sprintf("stream/upstreams/%v/servers/", upstream)
err = client.post(path, &server)
if err != nil {
return fmt.Errorf("failed to add %v stream server to %v upstream: %v", server.Server, upstream, err)
}
return nil
}
// DeleteStreamServer removes the server from the upstream.
func (client *NginxPlusClient) DeleteStreamServer(upstream string, server string) error {
id, err := client.getIDOfStreamServer(upstream, server)
if err != nil {
return fmt.Errorf("failed to remove %v stream server from %v upstream: %v", server, upstream, err)
}
if id == -1 {
return fmt.Errorf("failed to remove %v stream server from %v upstream: server doesn't exist", server, upstream)
}
path := fmt.Sprintf("stream/upstreams/%v/servers/%v", upstream, id)
err = client.delete(path)
if err != nil {
return fmt.Errorf("failed to remove %v stream server from %v upstream: %v", server, upstream, err)
}
return nil
}
// UpdateStreamServers updates the servers of the upstream.
// Servers that are in the slice, but don't exist in NGINX will be added to NGINX.
// Servers that aren't in the slice, but exist in NGINX, will be removed from NGINX.
func (client *NginxPlusClient) UpdateStreamServers(upstream string, servers []StreamUpstreamServer) ([]StreamUpstreamServer, []StreamUpstreamServer, error) {
serversInNginx, err := client.GetStreamServers(upstream)
if err != nil {
return nil, nil, fmt.Errorf("failed to update stream servers of %v upstream: %v", upstream, err)
}
toAdd, toDelete := determineStreamUpdates(servers, serversInNginx)
for _, server := range toAdd {
err := client.AddStreamServer(upstream, server)
if err != nil {
return nil, nil, fmt.Errorf("failed to update stream servers of %v upstream: %v", upstream, err)
}
}
for _, server := range toDelete {
err := client.DeleteStreamServer(upstream, server.Server)
if err != nil {
return nil, nil, fmt.Errorf("failed to update stream servers of %v upstream: %v", upstream, err)
}
}
return toAdd, toDelete, nil
}
func (client *NginxPlusClient) getIDOfStreamServer(upstream string, name string) (int64, error) {
servers, err := client.GetStreamServers(upstream)
if err != nil {
return -1, fmt.Errorf("error getting id of stream server %v of upstream %v: %v", name, upstream, err)
}
for _, s := range servers {
if s.Server == name {
return s.ID, nil
}
}
return -1, nil
}
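// determineStreamUpdates compares the desired stream servers with the servers currently configured
// in NGINX: servers present only in updatedServers are returned in toAdd, and servers present only
// in nginxServers are returned in toRemove.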
func determineStreamUpdates(updatedServers []StreamUpstreamServer, nginxServers []StreamUpstreamServer) (toAdd []StreamUpstreamServer, toRemove []StreamUpstreamServer) {
for _, server := range updatedServers {
found := false
for _, serverNGX := range nginxServers {
if server.Server == serverNGX.Server {
found = true
break
}
}
if !found {
toAdd = append(toAdd, server)
}
}
for _, serverNGX := range nginxServers {
found := false
for _, server := range updatedServers {
if serverNGX.Server == server.Server {
found = true
break
}
}
if !found {
toRemove = append(toRemove, serverNGX)
}
}
return
}
// GetStats gets connection, request, ssl, zone, and upstream related stats from the NGINX Plus API.
func (client *NginxPlusClient) GetStats() (*Stats, error) {
cons, err := client.getConnections()
if err != nil {
return nil, fmt.Errorf("failed to get stats: %v", err)
}
requests, err := client.getHTTPRequests()
if err != nil {
return nil, fmt.Errorf("failed to get stats: %v", err)
}
ssl, err := client.getSSL()
if err != nil {
return nil, fmt.Errorf("failed to get stats: %v", err)
}
zones, err := client.getServerZones()
if err != nil {
return nil, fmt.Errorf("failed to get stats: %v", err)
}
upstreams, err := client.getUpstreams()
if err != nil {
return nil, fmt.Errorf("failed to get stats: %v", err)
}
return &Stats{
Connections: *cons,
HTTPRequests: *requests,
SSL: *ssl,
ServerZones: *zones,
Upstreams: *upstreams,
}, nil
}
func (client *NginxPlusClient) getConnections() (*Connections, error) {
var cons Connections
err := client.get("connections", &cons)
if err != nil {
return nil, fmt.Errorf("failed to get connections: %v", err)
}
return &cons, nil
}
func (client *NginxPlusClient) getHTTPRequests() (*HTTPRequests, error) {
var requests HTTPRequests
err := client.get("http/requests", &requests)
if err != nil {
return nil, fmt.Errorf("failed to get http requests: %v", err)
}
return &requests, nil
}
func (client *NginxPlusClient) getSSL() (*SSL, error) {
var ssl SSL
err := client.get("ssl", &ssl)
if err != nil {
return nil, fmt.Errorf("failed to get ssl: %v", err)
}
return &ssl, nil
}
func (client *NginxPlusClient) getServerZones() (*ServerZones, error) {
var zones ServerZones
err := client.get("http/server_zones", &zones)
if err != nil {
return nil, fmt.Errorf("failed to get server zones: %v", err)
}
return &zones, err
}
func (client *NginxPlusClient) getUpstreams() (*Upstreams, error) {
var upstreams Upstreams
err := client.get("http/upstreams", &upstreams)
if err != nil {
return nil, fmt.Errorf("failed to get upstreams: %v", err)
}
return &upstreams, nil
}

client/nginx_test.go (new file)

@@ -0,0 +1,47 @@
package client
import "testing"
const validStubStats = "Active connections: 1457 \nserver accepts handled requests\n 6717066 6717066 65844359 \nReading: 1 Writing: 8 Waiting: 1448 \n"
func TestParseStubStatsValidInput(t *testing.T) {
var tests = []struct {
input []byte
expectedResult StubStats
expectedError bool
}{
{
input: []byte(validStubStats),
expectedResult: StubStats{
Connections: StubConnections{
Active: 1457,
Accepted: 6717066,
Handled: 6717066,
Reading: 1,
Writing: 8,
Waiting: 1448,
},
Requests: 65844359,
},
expectedError: false,
},
{
input: []byte("invalid-stats"),
expectedError: true,
},
}
for _, test := range tests {
var result StubStats
err := parseStubStats(test.input, &result)
if err != nil && !test.expectedError {
t.Errorf("parseStubStats() returned error for valid input %q: %v", string(test.input), err)
}
if !test.expectedError && test.expectedResult != result {
t.Errorf("parseStubStats() result %v != expected %v for input %q", result, test.expectedResult, test.input)
}
}
}

collector/helper.go (new file)

@@ -0,0 +1,7 @@
package collector
import "github.com/prometheus/client_golang/prometheus"
func newGlobalMetric(namespace string, metricName string, docString string) *prometheus.Desc {
return prometheus.NewDesc(namespace+"_"+metricName, docString, nil, nil)
}

collector/nginx.go (new file)

@@ -0,0 +1,67 @@
package collector
import (
"log"
"sync"
"github.com/nginxinc/nginx-prometheus-exporter/client"
"github.com/prometheus/client_golang/prometheus"
)
// NginxCollector collects NGINX metrics. It implements prometheus.Collector interface.
type NginxCollector struct {
nginxClient *client.NginxClient
metrics map[string]*prometheus.Desc
mutex sync.Mutex
}
// NewNginxCollector creates an NginxCollector.
func NewNginxCollector(nginxClient *client.NginxClient, namespace string) *NginxCollector {
return &NginxCollector{
nginxClient: nginxClient,
metrics: map[string]*prometheus.Desc{
"connections_active": newGlobalMetric(namespace, "connections_active", "Active client connections"),
"connections_accepted": newGlobalMetric(namespace, "connections_accepted", "Accepted client connections"),
"connections_handled": newGlobalMetric(namespace, "connections_handled", "Handled client connections"),
"connections_reading": newGlobalMetric(namespace, "connections_reading", "Connections where NGINX is reading the request header"),
"connections_writing": newGlobalMetric(namespace, "connections_writing", "Connections where NGINX is writing the response back to the client"),
"connections_waiting": newGlobalMetric(namespace, "connections_waiting", "Idle client connections"),
"http_requests_total": newGlobalMetric(namespace, "http_requests_total", "Total http requests"),
},
}
}
// Describe sends the super-set of all possible descriptors of NGINX metrics
// to the provided channel.
func (c *NginxCollector) Describe(ch chan<- *prometheus.Desc) {
for _, m := range c.metrics {
ch <- m
}
}
// Collect fetches metrics from NGINX and sends them to the provided channel.
func (c *NginxCollector) Collect(ch chan<- prometheus.Metric) {
c.mutex.Lock() // To protect metrics from concurrent collects
defer c.mutex.Unlock()
stats, err := c.nginxClient.GetStubStats()
if err != nil {
log.Printf("Error getting stats: %v", err)
return
}
ch <- prometheus.MustNewConstMetric(c.metrics["connections_active"],
prometheus.GaugeValue, float64(stats.Connections.Active))
ch <- prometheus.MustNewConstMetric(c.metrics["connections_accepted"],
prometheus.CounterValue, float64(stats.Connections.Accepted))
ch <- prometheus.MustNewConstMetric(c.metrics["connections_handled"],
prometheus.CounterValue, float64(stats.Connections.Handled))
ch <- prometheus.MustNewConstMetric(c.metrics["connections_reading"],
prometheus.GaugeValue, float64(stats.Connections.Reading))
ch <- prometheus.MustNewConstMetric(c.metrics["connections_writing"],
prometheus.GaugeValue, float64(stats.Connections.Writing))
ch <- prometheus.MustNewConstMetric(c.metrics["connections_waiting"],
prometheus.GaugeValue, float64(stats.Connections.Waiting))
ch <- prometheus.MustNewConstMetric(c.metrics["http_requests_total"],
prometheus.CounterValue, float64(stats.Requests))
}

collector/nginx_plus.go (new file)

@@ -0,0 +1,207 @@
package collector
import (
"log"
"sync"
"github.com/nginxinc/nginx-prometheus-exporter/client"
"github.com/prometheus/client_golang/prometheus"
)
// NginxPlusCollector collects NGINX Plus metrics. It implements prometheus.Collector interface.
type NginxPlusCollector struct {
nginxClient *client.NginxPlusClient
totalMetrics, serverZoneMetrics, upstreamMetrics, upstreamServerMetrics map[string]*prometheus.Desc
mutex sync.Mutex
}
// NewNginxPlusCollector creates an NginxPlusCollector.
func NewNginxPlusCollector(nginxClient *client.NginxPlusClient, namespace string) *NginxPlusCollector {
return &NginxPlusCollector{
nginxClient: nginxClient,
totalMetrics: map[string]*prometheus.Desc{
"connections_accepted": newGlobalMetric(namespace, "connections_accepted", "Accepted client connections"),
"connections_dropped": newGlobalMetric(namespace, "connections_dropped", "Dropped client connections"),
"connections_active": newGlobalMetric(namespace, "connections_active", "Active client connections"),
"connections_idle": newGlobalMetric(namespace, "connections_idle", "Idle client connections"),
"http_requests_total": newGlobalMetric(namespace, "http_requests_total", "Total http requests"),
"http_requests_current": newGlobalMetric(namespace, "http_requests_current", "Current http requests"),
"ssl_handshakes": newGlobalMetric(namespace, "ssl_handshakes", "Successful SSL handshakes"),
"ssl_handshakes_failed": newGlobalMetric(namespace, "ssl_handshakes_failed", "Failed SSL handshakes"),
"ssl_session_reuses": newGlobalMetric(namespace, "ssl_session_reuses", "Session reuses during SSL handshake"),
},
serverZoneMetrics: map[string]*prometheus.Desc{
"processing": newServerZoneMetric(namespace, "processing", "Client requests that are currently being processed", nil),
"requests": newServerZoneMetric(namespace, "requests", "Total client requests", nil),
"responses_1xx": newServerZoneMetric(namespace, "responses", "Total responses sent to clients", prometheus.Labels{"code": "1xx"}),
"responses_2xx": newServerZoneMetric(namespace, "responses", "Total responses sent to clients", prometheus.Labels{"code": "2xx"}),
"responses_3xx": newServerZoneMetric(namespace, "responses", "Total responses sent to clients", prometheus.Labels{"code": "3xx"}),
"responses_4xx": newServerZoneMetric(namespace, "responses", "Total responses sent to clients", prometheus.Labels{"code": "4xx"}),
"responses_5xx": newServerZoneMetric(namespace, "responses", "Total responses sent to clients", prometheus.Labels{"code": "5xx"}),
"discarded": newServerZoneMetric(namespace, "discarded", "Requests completed without sending a response", nil),
"received": newServerZoneMetric(namespace, "received", "Bytes received from clients", nil),
"sent": newServerZoneMetric(namespace, "sent", "Bytes sent to clients", nil),
},
upstreamMetrics: map[string]*prometheus.Desc{
"keepalives": newUpstreamMetric(namespace, "keepalives", "Idle keepalive connections"),
"zombies": newUpstreamMetric(namespace, "zombies", "Servers removed from the group but still processing active client requests"),
},
upstreamServerMetrics: map[string]*prometheus.Desc{
"state": newUpstreamServerMetric(namespace, "state", "Current state", nil),
"active": newUpstreamServerMetric(namespace, "active", "Active connections", nil),
"requests": newUpstreamServerMetric(namespace, "requests", "Total client requests", nil),
"responses_1xx": newUpstreamServerMetric(namespace, "responses", "Total responses sent to clients", prometheus.Labels{"code": "1xx"}),
"responses_2xx": newUpstreamServerMetric(namespace, "responses", "Total responses sent to clients", prometheus.Labels{"code": "2xx"}),
"responses_3xx": newUpstreamServerMetric(namespace, "responses", "Total responses sent to clients", prometheus.Labels{"code": "3xx"}),
"responses_4xx": newUpstreamServerMetric(namespace, "responses", "Total responses sent to clients", prometheus.Labels{"code": "4xx"}),
"responses_5xx": newUpstreamServerMetric(namespace, "responses", "Total responses sent to clients", prometheus.Labels{"code": "5xx"}),
"sent": newUpstreamServerMetric(namespace, "sent", "Bytes sent to this server", nil),
"received": newUpstreamServerMetric(namespace, "received", "Bytes received from this server", nil),
"fails": newUpstreamServerMetric(namespace, "fails", "Number of unsuccessful attempts to communicate with the server", nil),
"unavail": newUpstreamServerMetric(namespace, "unavail", "How many times the server became unavailable for client requests (state 'unavail') due to the number of unsuccessful attempts reaching the max_fails threshold", nil),
"header_time": newUpstreamServerMetric(namespace, "header_time", "Average time to get the response header from the server", nil),
"response_time": newUpstreamServerMetric(namespace, "response_time", "Average time to get the full response from the server", nil),
"health_checks_checks": newUpstreamServerMetric(namespace, "health_checks_checks", "Total health check requests", nil),
"health_checks_fails": newUpstreamServerMetric(namespace, "health_checks_fails", "Failed health checks", nil),
"health_checks_unhealthy": newUpstreamServerMetric(namespace, "health_checks_unhealthy", "How many times the server became unhealthy (state 'unhealthy')", nil),
},
}
}
// Describe sends the super-set of all possible descriptors of NGINX Plus metrics
// to the provided channel.
func (c *NginxPlusCollector) Describe(ch chan<- *prometheus.Desc) {
for _, m := range c.totalMetrics {
ch <- m
}
for _, m := range c.serverZoneMetrics {
ch <- m
}
for _, m := range c.upstreamMetrics {
ch <- m
}
for _, m := range c.upstreamServerMetrics {
ch <- m
}
}
// Collect fetches metrics from NGINX Plus and sends them to the provided channel.
func (c *NginxPlusCollector) Collect(ch chan<- prometheus.Metric) {
c.mutex.Lock() // To protect metrics from concurrent collects
defer c.mutex.Unlock()
stats, err := c.nginxClient.GetStats()
if err != nil {
log.Printf("Error getting stats: %v", err)
return
}
ch <- prometheus.MustNewConstMetric(c.totalMetrics["connections_accepted"],
prometheus.CounterValue, float64(stats.Connections.Accepted))
ch <- prometheus.MustNewConstMetric(c.totalMetrics["connections_dropped"],
prometheus.CounterValue, float64(stats.Connections.Dropped))
ch <- prometheus.MustNewConstMetric(c.totalMetrics["connections_active"],
prometheus.GaugeValue, float64(stats.Connections.Active))
ch <- prometheus.MustNewConstMetric(c.totalMetrics["connections_idle"],
prometheus.GaugeValue, float64(stats.Connections.Idle))
ch <- prometheus.MustNewConstMetric(c.totalMetrics["http_requests_total"],
prometheus.CounterValue, float64(stats.HTTPRequests.Total))
ch <- prometheus.MustNewConstMetric(c.totalMetrics["http_requests_current"],
prometheus.GaugeValue, float64(stats.HTTPRequests.Current))
ch <- prometheus.MustNewConstMetric(c.totalMetrics["ssl_handshakes"],
prometheus.CounterValue, float64(stats.SSL.Handshakes))
ch <- prometheus.MustNewConstMetric(c.totalMetrics["ssl_handshakes_failed"],
prometheus.CounterValue, float64(stats.SSL.HandshakesFailed))
ch <- prometheus.MustNewConstMetric(c.totalMetrics["ssl_session_reuses"],
prometheus.CounterValue, float64(stats.SSL.SessionReuses))
for name, zone := range stats.ServerZones {
ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["processing"],
prometheus.GaugeValue, float64(zone.Processing), name)
ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["requests"],
prometheus.CounterValue, float64(zone.Requests), name)
ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["responses_1xx"],
prometheus.CounterValue, float64(zone.Responses.Responses1xx), name)
ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["responses_2xx"],
prometheus.CounterValue, float64(zone.Responses.Responses2xx), name)
ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["responses_3xx"],
prometheus.CounterValue, float64(zone.Responses.Responses3xx), name)
ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["responses_4xx"],
prometheus.CounterValue, float64(zone.Responses.Responses4xx), name)
ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["responses_5xx"],
prometheus.CounterValue, float64(zone.Responses.Responses5xx), name)
ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["discarded"],
prometheus.CounterValue, float64(zone.Discarded), name)
ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["received"],
prometheus.CounterValue, float64(zone.Received), name)
ch <- prometheus.MustNewConstMetric(c.serverZoneMetrics["sent"],
prometheus.CounterValue, float64(zone.Sent), name)
}
for name, upstream := range stats.Upstreams {
for _, peer := range upstream.Peers {
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["state"],
prometheus.GaugeValue, upstreamServerStates[peer.State], name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["active"],
prometheus.GaugeValue, float64(peer.Active), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["requests"],
prometheus.CounterValue, float64(peer.Requests), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["responses_1xx"],
prometheus.CounterValue, float64(peer.Responses.Responses1xx), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["responses_2xx"],
prometheus.CounterValue, float64(peer.Responses.Responses2xx), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["responses_3xx"],
prometheus.CounterValue, float64(peer.Responses.Responses3xx), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["responses_4xx"],
prometheus.CounterValue, float64(peer.Responses.Responses4xx), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["responses_5xx"],
prometheus.CounterValue, float64(peer.Responses.Responses5xx), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["sent"],
prometheus.CounterValue, float64(peer.Sent), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["received"],
prometheus.CounterValue, float64(peer.Received), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["fails"],
prometheus.CounterValue, float64(peer.Fails), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["unavail"],
prometheus.CounterValue, float64(peer.Unavail), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["header_time"],
prometheus.GaugeValue, float64(peer.HeaderTime), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["response_time"],
prometheus.GaugeValue, float64(peer.ResponseTime), name, peer.Server)
if peer.HealthChecks != (client.HealthChecks{}) {
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["health_checks_checks"],
prometheus.CounterValue, float64(peer.HealthChecks.Checks), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["health_checks_fails"],
prometheus.CounterValue, float64(peer.HealthChecks.Fails), name, peer.Server)
ch <- prometheus.MustNewConstMetric(c.upstreamServerMetrics["health_checks_unhealthy"],
prometheus.CounterValue, float64(peer.HealthChecks.Unhealthy), name, peer.Server)
}
}
ch <- prometheus.MustNewConstMetric(c.upstreamMetrics["keepalives"],
prometheus.GaugeValue, float64(upstream.Keepalives), name)
ch <- prometheus.MustNewConstMetric(c.upstreamMetrics["zombies"],
prometheus.GaugeValue, float64(upstream.Zombies), name)
}
}
var upstreamServerStates = map[string]float64{
"up": 1.0,
"draining": 2.0,
"down": 3.0,
"unavail": 4.0,
"checking": 5.0,
"unhealthy": 6.0,
}
func newServerZoneMetric(namespace string, metricName string, docString string, constLabels prometheus.Labels) *prometheus.Desc {
return prometheus.NewDesc(prometheus.BuildFQName(namespace, "server_zone", metricName), docString, []string{"server_zone"}, constLabels)
}
func newUpstreamMetric(namespace string, metricName string, docString string) *prometheus.Desc {
return prometheus.NewDesc(prometheus.BuildFQName(namespace, "upstream", metricName), docString, []string{"upstream"}, nil)
}
func newUpstreamServerMetric(namespace string, metricName string, docString string, constLabels prometheus.Labels) *prometheus.Desc {
return prometheus.NewDesc(prometheus.BuildFQName(namespace, "upstream_server", metricName), docString, []string{"upstream", "server"}, constLabels)
}

exporter.go (new file)

@@ -0,0 +1,62 @@
package main
import (
"flag"
"log"
"net/http"
"github.com/nginxinc/nginx-prometheus-exporter/client"
"github.com/nginxinc/nginx-prometheus-exporter/collector"
"github.com/prometheus/client_golang/prometheus"
"github.com/prometheus/client_golang/prometheus/promhttp"
)
var (
// Set during go build
version string
gitCommit string
// Command-line flags
listenAddr = flag.String("web.listen-address", ":9113", "An address to listen on for web interface and telemetry.")
metricsPath = flag.String("web.telemetry-path", "/metrics", "A path under which to expose metrics.")
nginxPlus = flag.Bool("nginx.plus", false, "Start the exporter for NGINX Plus. By default, the exporter is started for NGINX.")
scrapeURI = flag.String("nginx.scrape-uri", "http://127.0.0.1:8080/stub_status",
`A URI for scraping NGINX or NGINX Plus metrics.
For NGINX, the stub_status page must be available through the URI. For NGINX Plus -- the API.`)
)
func main() {
flag.Parse()
log.Printf("Starting NGINX Prometheus Exporter Version=%v GitCommit=%v", version, gitCommit)
registry := prometheus.NewRegistry()
if *nginxPlus {
client, err := client.NewNginxPlusClient(&http.Client{}, *scrapeURI)
if err != nil {
log.Fatalf("Could not create Nginx Plus Client: %v", err)
}
registry.MustRegister(collector.NewNginxPlusCollector(client, "nginxplus"))
} else {
client, err := client.NewNginxClient(&http.Client{}, *scrapeURI)
if err != nil {
log.Fatalf("Could not create Nginx Client: %v", err)
}
registry.MustRegister(collector.NewNginxCollector(client, "nginx"))
}
http.Handle(*metricsPath, promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
w.Write([]byte(`<html>
<head><title>NGINX Exporter</title></head>
<body>
<h1>NGINX Exporter</h1>
<p><a href='/metrics'>Metrics</a></p>
</body>
</html>`))
})
log.Fatal(http.ListenAndServe(*listenAddr, nil))
}
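
A quick way to check the exporter once it is running with the defaults above (listen address `:9113`, telemetry path `/metrics`) is to request the metrics page. The snippet below is an illustrative client, not part of this repository; the address and path simply mirror the flag defaults shown in exporter.go.

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"net/http"
)

func main() {
	// Defaults from exporter.go: -web.listen-address=":9113", -web.telemetry-path="/metrics".
	resp, err := http.Get("http://localhost:9113/metrics")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(body))
}
```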

37
glide.lock generated Normal file
View File

@ -0,0 +1,37 @@
hash: b62c6e250515f52280a59752a7075ef981a91b95b5f4c745d101b9eded3b217d
updated: 2018-02-16T19:25:05.201189672Z
imports:
- name: github.com/beorn7/perks
version: 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
subpackages:
- quantile
- name: github.com/golang/protobuf
version: 1643683e1b54a9e88ad26d98f81400c8c9d9f4f9
subpackages:
- proto
- name: github.com/matttproud/golang_protobuf_extensions
version: c12348ce28de40eed0136aa2b644d0ee0650e56c
subpackages:
- pbutil
- name: github.com/prometheus/client_golang
version: e69720d204a4aa3b0c65dc91208645ba0a52b9cd
subpackages:
- prometheus
- prometheus/promhttp
- name: github.com/prometheus/client_model
version: 99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c
subpackages:
- go
- name: github.com/prometheus/common
version: 89604d197083d4781071d3c65855d24ecfb0a563
subpackages:
- expfmt
- internal/bitbucket.org/ww/goautoneg
- model
- name: github.com/prometheus/procfs
version: 282c8707aa210456a825798969cc27edda34992a
subpackages:
- internal/util
- nfs
- xfs
testImports: []

6
glide.yaml Normal file
View File

@ -0,0 +1,6 @@
package: github.com/nginxinc/nginx-prometheus-exporter
import:
- package: github.com/prometheus/client_golang
subpackages:
- prometheus
- prometheus/promhttp

2
vendor/github.com/beorn7/perks/.gitignore generated vendored Normal file
View File

@ -0,0 +1,2 @@
*.test
*.prof

20
vendor/github.com/beorn7/perks/LICENSE generated vendored Normal file
View File

@ -0,0 +1,20 @@
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

31
vendor/github.com/beorn7/perks/README.md generated vendored Normal file
View File

@ -0,0 +1,31 @@
# Perks for Go (golang.org)
Perks contains the Go package quantile that computes approximate quantiles over
an unbounded data stream within low memory and CPU bounds.
For more information and examples, see:
http://godoc.org/github.com/bmizerany/perks
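
As a minimal sketch of the quantile API vendored here (it mirrors the package's own example_test.go; the target quantiles, error bounds, and input values below are arbitrary):

```go
package main

import (
	"fmt"

	"github.com/beorn7/perks/quantile"
)

func main() {
	// Track the 50th and 99th percentiles with the given absolute errors.
	q := quantile.NewTargeted(map[float64]float64{
		0.50: 0.005,
		0.99: 0.001,
	})
	for i := 1; i <= 1000; i++ {
		q.Insert(float64(i))
	}
	fmt.Println("p50:", q.Query(0.50))
	fmt.Println("p99:", q.Query(0.99))
	fmt.Println("count:", q.Count())
}
```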
A very special thank you and shout out to Graham Cormode (Rutgers University),
Flip Korn (AT&T Labs-Research), S. Muthukrishnan (Rutgers University), and
Divesh Srivastava (AT&T Labs-Research) for their research and publication of
[Effective Computation of Biased Quantiles over Data Streams](http://www.cs.rutgers.edu/~muthu/bquant.pdf)
Thank you, also:
* Armon Dadgar (@armon)
* Andrew Gerrand (@nf)
* Brad Fitzpatrick (@bradfitz)
* Keith Rarick (@kr)
FAQ:
Q: Why not move the quantile package into the project root?
A: I want to add more packages to perks later.
Copyright (C) 2013 Blake Mizerany
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

26
vendor/github.com/beorn7/perks/histogram/bench_test.go generated vendored Normal file
View File

@ -0,0 +1,26 @@
package histogram
import (
"math/rand"
"testing"
)
func BenchmarkInsert10Bins(b *testing.B) {
b.StopTimer()
h := New(10)
b.StartTimer()
for i := 0; i < b.N; i++ {
f := rand.ExpFloat64()
h.Insert(f)
}
}
func BenchmarkInsert100Bins(b *testing.B) {
b.StopTimer()
h := New(100)
b.StartTimer()
for i := 0; i < b.N; i++ {
f := rand.ExpFloat64()
h.Insert(f)
}
}

108
vendor/github.com/beorn7/perks/histogram/histogram.go generated vendored Normal file
View File

@ -0,0 +1,108 @@
// Package histogram provides a Go implementation of BigML's histogram package
// for Clojure/Java. It is currently experimental.
package histogram
import (
"container/heap"
"math"
"sort"
)
type Bin struct {
Count int
Sum float64
}
func (b *Bin) Update(x *Bin) {
b.Count += x.Count
b.Sum += x.Sum
}
func (b *Bin) Mean() float64 {
return b.Sum / float64(b.Count)
}
type Bins []*Bin
func (bs Bins) Len() int { return len(bs) }
func (bs Bins) Less(i, j int) bool { return bs[i].Mean() < bs[j].Mean() }
func (bs Bins) Swap(i, j int) { bs[i], bs[j] = bs[j], bs[i] }
func (bs *Bins) Push(x interface{}) {
*bs = append(*bs, x.(*Bin))
}
func (bs *Bins) Pop() interface{} {
return bs.remove(len(*bs) - 1)
}
func (bs *Bins) remove(n int) *Bin {
if n < 0 || len(*bs) < n {
return nil
}
x := (*bs)[n]
*bs = append((*bs)[:n], (*bs)[n+1:]...)
return x
}
type Histogram struct {
res *reservoir
}
func New(maxBins int) *Histogram {
return &Histogram{res: newReservoir(maxBins)}
}
func (h *Histogram) Insert(f float64) {
h.res.insert(&Bin{1, f})
h.res.compress()
}
func (h *Histogram) Bins() Bins {
return h.res.bins
}
type reservoir struct {
n int
maxBins int
bins Bins
}
func newReservoir(maxBins int) *reservoir {
return &reservoir{maxBins: maxBins}
}
func (r *reservoir) insert(bin *Bin) {
r.n += bin.Count
i := sort.Search(len(r.bins), func(i int) bool {
return r.bins[i].Mean() >= bin.Mean()
})
if i < 0 || i == r.bins.Len() {
// TODO(blake): Maybe use an .insert(i, bin) instead of
// performing the extra work of a heap.Push.
heap.Push(&r.bins, bin)
return
}
r.bins[i].Update(bin)
}
func (r *reservoir) compress() {
for r.bins.Len() > r.maxBins {
minGapIndex := -1
minGap := math.MaxFloat64
for i := 0; i < r.bins.Len()-1; i++ {
gap := gapWeight(r.bins[i], r.bins[i+1])
if minGap > gap {
minGap = gap
minGapIndex = i
}
}
prev := r.bins[minGapIndex]
next := r.bins.remove(minGapIndex + 1)
prev.Update(next)
}
}
func gapWeight(prev, next *Bin) float64 {
return next.Mean() - prev.Mean()
}

View File

@ -0,0 +1,38 @@
package histogram
import (
"math/rand"
"testing"
)
func TestHistogram(t *testing.T) {
const numPoints = 1e6
const maxBins = 3
h := New(maxBins)
for i := 0; i < numPoints; i++ {
f := rand.ExpFloat64()
h.Insert(f)
}
bins := h.Bins()
if g := len(bins); g > maxBins {
t.Fatalf("got %d bins, wanted <= %d", g, maxBins)
}
for _, b := range bins {
t.Logf("%+v", b)
}
if g := count(h.Bins()); g != numPoints {
t.Fatalf("binned %d points, wanted %d", g, numPoints)
}
}
func count(bins Bins) int {
binCounts := 0
for _, b := range bins {
binCounts += b.Count
}
return binCounts
}

63
vendor/github.com/beorn7/perks/quantile/bench_test.go generated vendored Normal file
View File

@ -0,0 +1,63 @@
package quantile
import (
"testing"
)
func BenchmarkInsertTargeted(b *testing.B) {
b.ReportAllocs()
s := NewTargeted(Targets)
b.ResetTimer()
for i := float64(0); i < float64(b.N); i++ {
s.Insert(i)
}
}
func BenchmarkInsertTargetedSmallEpsilon(b *testing.B) {
s := NewTargeted(TargetsSmallEpsilon)
b.ResetTimer()
for i := float64(0); i < float64(b.N); i++ {
s.Insert(i)
}
}
func BenchmarkInsertBiased(b *testing.B) {
s := NewLowBiased(0.01)
b.ResetTimer()
for i := float64(0); i < float64(b.N); i++ {
s.Insert(i)
}
}
func BenchmarkInsertBiasedSmallEpsilon(b *testing.B) {
s := NewLowBiased(0.0001)
b.ResetTimer()
for i := float64(0); i < float64(b.N); i++ {
s.Insert(i)
}
}
func BenchmarkQuery(b *testing.B) {
s := NewTargeted(Targets)
for i := float64(0); i < 1e6; i++ {
s.Insert(i)
}
b.ResetTimer()
n := float64(b.N)
for i := float64(0); i < n; i++ {
s.Query(i / n)
}
}
func BenchmarkQuerySmallEpsilon(b *testing.B) {
s := NewTargeted(TargetsSmallEpsilon)
for i := float64(0); i < 1e6; i++ {
s.Insert(i)
}
b.ResetTimer()
n := float64(b.N)
for i := float64(0); i < n; i++ {
s.Query(i / n)
}
}

121
vendor/github.com/beorn7/perks/quantile/example_test.go generated vendored Normal file
View File

@ -0,0 +1,121 @@
// +build go1.1
package quantile_test
import (
"bufio"
"fmt"
"log"
"os"
"strconv"
"time"
"github.com/beorn7/perks/quantile"
)
func Example_simple() {
ch := make(chan float64)
go sendFloats(ch)
// Compute the 50th, 90th, and 99th percentile.
q := quantile.NewTargeted(map[float64]float64{
0.50: 0.005,
0.90: 0.001,
0.99: 0.0001,
})
for v := range ch {
q.Insert(v)
}
fmt.Println("perc50:", q.Query(0.50))
fmt.Println("perc90:", q.Query(0.90))
fmt.Println("perc99:", q.Query(0.99))
fmt.Println("count:", q.Count())
// Output:
// perc50: 5
// perc90: 16
// perc99: 223
// count: 2388
}
func Example_mergeMultipleStreams() {
// Scenario:
// We have multiple database shards. On each shard, there is a process
// collecting query response times from the database logs and inserting
// them into a Stream (created via NewTargeted(0.90)), much like the
// Simple example. These processes expose a network interface for us to
// ask them to serialize and send us the results of their
// Stream.Samples so we may Merge and Query them.
//
// NOTES:
// * These sample sets are small, allowing us to get them
// across the network much faster than sending the entire list of data
// points.
//
// * For this to work correctly, we must supply the same quantiles
// a priori the process collecting the samples supplied to NewTargeted,
// even if we do not plan to query them all here.
ch := make(chan quantile.Samples)
getDBQuerySamples(ch)
q := quantile.NewTargeted(map[float64]float64{0.90: 0.001})
for samples := range ch {
q.Merge(samples)
}
fmt.Println("perc90:", q.Query(0.90))
}
func Example_window() {
// Scenario: We want the 90th, 95th, and 99th percentiles for each
// minute.
ch := make(chan float64)
go sendStreamValues(ch)
tick := time.NewTicker(1 * time.Minute)
q := quantile.NewTargeted(map[float64]float64{
0.90: 0.001,
0.95: 0.0005,
0.99: 0.0001,
})
for {
select {
case t := <-tick.C:
flushToDB(t, q.Samples())
q.Reset()
case v := <-ch:
q.Insert(v)
}
}
}
func sendStreamValues(ch chan float64) {
// Use your imagination
}
func flushToDB(t time.Time, samples quantile.Samples) {
// Use your imagination
}
// This is a stub for the above example. In reality this would hit the remote
// servers via http or something like it.
func getDBQuerySamples(ch chan quantile.Samples) {}
func sendFloats(ch chan<- float64) {
f, err := os.Open("exampledata.txt")
if err != nil {
log.Fatal(err)
}
sc := bufio.NewScanner(f)
for sc.Scan() {
b := sc.Bytes()
v, err := strconv.ParseFloat(string(b), 64)
if err != nil {
log.Fatal(err)
}
ch <- v
}
if sc.Err() != nil {
log.Fatal(sc.Err())
}
close(ch)
}

2388
vendor/github.com/beorn7/perks/quantile/exampledata.txt generated vendored Normal file

File diff suppressed because it is too large

292
vendor/github.com/beorn7/perks/quantile/stream.go generated vendored Normal file
View File

@ -0,0 +1,292 @@
// Package quantile computes approximate quantiles over an unbounded data
// stream within low memory and CPU bounds.
//
// A small amount of accuracy is traded to achieve the above properties.
//
// Multiple streams can be merged before calling Query to generate a single set
// of results. This is meaningful when the streams represent the same type of
// data. See Merge and Samples.
//
// For more detailed information about the algorithm used, see:
//
// Effective Computation of Biased Quantiles over Data Streams
//
// http://www.cs.rutgers.edu/~muthu/bquant.pdf
package quantile
import (
"math"
"sort"
)
// Sample holds an observed value and meta information for compression. JSON
// tags have been added for convenience.
type Sample struct {
Value float64 `json:",string"`
Width float64 `json:",string"`
Delta float64 `json:",string"`
}
// Samples represents a slice of samples. It implements sort.Interface.
type Samples []Sample
func (a Samples) Len() int { return len(a) }
func (a Samples) Less(i, j int) bool { return a[i].Value < a[j].Value }
func (a Samples) Swap(i, j int) { a[i], a[j] = a[j], a[i] }
type invariant func(s *stream, r float64) float64
// NewLowBiased returns an initialized Stream for low-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the lower ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within (1±Epsilon)*Quantile.
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewLowBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * r
}
return newStream(ƒ)
}
// NewHighBiased returns an initialized Stream for high-biased quantiles
// (e.g. 0.01, 0.1, 0.5) where the needed quantiles are not known a priori, but
// error guarantees can still be given even for the higher ranks of the data
// distribution.
//
// The provided epsilon is a relative error, i.e. the true quantile of a value
// returned by a query is guaranteed to be within 1-(1±Epsilon)*(1-Quantile).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error
// properties.
func NewHighBiased(epsilon float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
return 2 * epsilon * (s.n - r)
}
return newStream(ƒ)
}
// NewTargeted returns an initialized Stream concerned with a particular set of
// quantile values that are supplied a priori. Knowing these a priori reduces
// space and computation time. The targets map maps the desired quantiles to
// their absolute errors, i.e. the true quantile of a value returned by a query
// is guaranteed to be within (Quantile±Epsilon).
//
// See http://www.cs.rutgers.edu/~muthu/bquant.pdf for time, space, and error properties.
func NewTargeted(targets map[float64]float64) *Stream {
ƒ := func(s *stream, r float64) float64 {
var m = math.MaxFloat64
var f float64
for quantile, epsilon := range targets {
if quantile*s.n <= r {
f = (2 * epsilon * r) / quantile
} else {
f = (2 * epsilon * (s.n - r)) / (1 - quantile)
}
if f < m {
m = f
}
}
return m
}
return newStream(ƒ)
}
// Stream computes quantiles for a stream of float64s. It is not thread-safe by
// design. Take care when using across multiple goroutines.
type Stream struct {
*stream
b Samples
sorted bool
}
func newStream(ƒ invariant) *Stream {
x := &stream{ƒ: ƒ}
return &Stream{x, make(Samples, 0, 500), true}
}
// Insert inserts v into the stream.
func (s *Stream) Insert(v float64) {
s.insert(Sample{Value: v, Width: 1})
}
func (s *Stream) insert(sample Sample) {
s.b = append(s.b, sample)
s.sorted = false
if len(s.b) == cap(s.b) {
s.flush()
}
}
// Query returns the computed value of the qth percentile. If s was created with
// NewTargeted, and q is not in the set of quantiles provided a priori, Query
// will return an unspecified result.
func (s *Stream) Query(q float64) float64 {
if !s.flushed() {
// Fast path when there hasn't been enough data for a flush;
// this also yields better accuracy for small sets of data.
l := len(s.b)
if l == 0 {
return 0
}
i := int(math.Ceil(float64(l) * q))
if i > 0 {
i -= 1
}
s.maybeSort()
return s.b[i].Value
}
s.flush()
return s.stream.query(q)
}
// Merge merges samples into the underlying stream's samples. This is handy when
// merging multiple streams from separate threads, database shards, etc.
//
// ATTENTION: This method is broken and does not yield correct results. The
// underlying algorithm is not capable of merging streams correctly.
func (s *Stream) Merge(samples Samples) {
sort.Sort(samples)
s.stream.merge(samples)
}
// Reset reinitializes and clears the list reusing the samples buffer memory.
func (s *Stream) Reset() {
s.stream.reset()
s.b = s.b[:0]
}
// Samples returns stream samples held by s.
func (s *Stream) Samples() Samples {
if !s.flushed() {
return s.b
}
s.flush()
return s.stream.samples()
}
// Count returns the total number of samples observed in the stream
// since initialization.
func (s *Stream) Count() int {
return len(s.b) + s.stream.count()
}
func (s *Stream) flush() {
s.maybeSort()
s.stream.merge(s.b)
s.b = s.b[:0]
}
func (s *Stream) maybeSort() {
if !s.sorted {
s.sorted = true
sort.Sort(s.b)
}
}
func (s *Stream) flushed() bool {
return len(s.stream.l) > 0
}
type stream struct {
n float64
l []Sample
ƒ invariant
}
func (s *stream) reset() {
s.l = s.l[:0]
s.n = 0
}
func (s *stream) insert(v float64) {
s.merge(Samples{{v, 1, 0}})
}
func (s *stream) merge(samples Samples) {
// TODO(beorn7): This tries to merge not only individual samples, but
// whole summaries. The paper doesn't mention merging summaries at
// all. Unit tests show that the merging is inaccurate. Find out how to
// do merges properly.
var r float64
i := 0
for _, sample := range samples {
for ; i < len(s.l); i++ {
c := s.l[i]
if c.Value > sample.Value {
// Insert at position i.
s.l = append(s.l, Sample{})
copy(s.l[i+1:], s.l[i:])
s.l[i] = Sample{
sample.Value,
sample.Width,
math.Max(sample.Delta, math.Floor(s.ƒ(s, r))-1),
// TODO(beorn7): How to calculate delta correctly?
}
i++
goto inserted
}
r += c.Width
}
s.l = append(s.l, Sample{sample.Value, sample.Width, 0})
i++
inserted:
s.n += sample.Width
r += sample.Width
}
s.compress()
}
func (s *stream) count() int {
return int(s.n)
}
func (s *stream) query(q float64) float64 {
t := math.Ceil(q * s.n)
t += math.Ceil(s.ƒ(s, t) / 2)
p := s.l[0]
var r float64
for _, c := range s.l[1:] {
r += p.Width
if r+c.Width+c.Delta > t {
return p.Value
}
p = c
}
return p.Value
}
func (s *stream) compress() {
if len(s.l) < 2 {
return
}
x := s.l[len(s.l)-1]
xi := len(s.l) - 1
r := s.n - 1 - x.Width
for i := len(s.l) - 2; i >= 0; i-- {
c := s.l[i]
if c.Width+x.Width+x.Delta <= s.ƒ(s, r) {
x.Width += c.Width
s.l[xi] = x
// Remove element at i.
copy(s.l[i:], s.l[i+1:])
s.l = s.l[:len(s.l)-1]
xi -= 1
} else {
x = c
xi = i
}
r -= c.Width
}
}
func (s *stream) samples() Samples {
samples := make(Samples, len(s.l))
copy(samples, s.l)
return samples
}

215
vendor/github.com/beorn7/perks/quantile/stream_test.go generated vendored Normal file
View File

@ -0,0 +1,215 @@
package quantile
import (
"math"
"math/rand"
"sort"
"testing"
)
var (
Targets = map[float64]float64{
0.01: 0.001,
0.10: 0.01,
0.50: 0.05,
0.90: 0.01,
0.99: 0.001,
}
TargetsSmallEpsilon = map[float64]float64{
0.01: 0.0001,
0.10: 0.001,
0.50: 0.005,
0.90: 0.001,
0.99: 0.0001,
}
LowQuantiles = []float64{0.01, 0.1, 0.5}
HighQuantiles = []float64{0.99, 0.9, 0.5}
)
const RelativeEpsilon = 0.01
func verifyPercsWithAbsoluteEpsilon(t *testing.T, a []float64, s *Stream) {
sort.Float64s(a)
for quantile, epsilon := range Targets {
n := float64(len(a))
k := int(quantile * n)
if k < 1 {
k = 1
}
lower := int((quantile - epsilon) * n)
if lower < 1 {
lower = 1
}
upper := int(math.Ceil((quantile + epsilon) * n))
if upper > len(a) {
upper = len(a)
}
w, min, max := a[k-1], a[lower-1], a[upper-1]
if g := s.Query(quantile); g < min || g > max {
t.Errorf("q=%f: want %v [%f,%f], got %v", quantile, w, min, max, g)
}
}
}
func verifyLowPercsWithRelativeEpsilon(t *testing.T, a []float64, s *Stream) {
sort.Float64s(a)
for _, qu := range LowQuantiles {
n := float64(len(a))
k := int(qu * n)
lowerRank := int((1 - RelativeEpsilon) * qu * n)
upperRank := int(math.Ceil((1 + RelativeEpsilon) * qu * n))
w, min, max := a[k-1], a[lowerRank-1], a[upperRank-1]
if g := s.Query(qu); g < min || g > max {
t.Errorf("q=%f: want %v [%f,%f], got %v", qu, w, min, max, g)
}
}
}
func verifyHighPercsWithRelativeEpsilon(t *testing.T, a []float64, s *Stream) {
sort.Float64s(a)
for _, qu := range HighQuantiles {
n := float64(len(a))
k := int(qu * n)
lowerRank := int((1 - (1+RelativeEpsilon)*(1-qu)) * n)
upperRank := int(math.Ceil((1 - (1-RelativeEpsilon)*(1-qu)) * n))
w, min, max := a[k-1], a[lowerRank-1], a[upperRank-1]
if g := s.Query(qu); g < min || g > max {
t.Errorf("q=%f: want %v [%f,%f], got %v", qu, w, min, max, g)
}
}
}
func populateStream(s *Stream) []float64 {
a := make([]float64, 0, 1e5+100)
for i := 0; i < cap(a); i++ {
v := rand.NormFloat64()
// Add 5% asymmetric outliers.
if i%20 == 0 {
v = v*v + 1
}
s.Insert(v)
a = append(a, v)
}
return a
}
func TestTargetedQuery(t *testing.T) {
rand.Seed(42)
s := NewTargeted(Targets)
a := populateStream(s)
verifyPercsWithAbsoluteEpsilon(t, a, s)
}
func TestTargetedQuerySmallSampleSize(t *testing.T) {
rand.Seed(42)
s := NewTargeted(TargetsSmallEpsilon)
a := []float64{1, 2, 3, 4, 5}
for _, v := range a {
s.Insert(v)
}
verifyPercsWithAbsoluteEpsilon(t, a, s)
// If not yet flushed, results should be precise:
if !s.flushed() {
for φ, want := range map[float64]float64{
0.01: 1,
0.10: 1,
0.50: 3,
0.90: 5,
0.99: 5,
} {
if got := s.Query(φ); got != want {
t.Errorf("want %f for φ=%f, got %f", want, φ, got)
}
}
}
}
func TestLowBiasedQuery(t *testing.T) {
rand.Seed(42)
s := NewLowBiased(RelativeEpsilon)
a := populateStream(s)
verifyLowPercsWithRelativeEpsilon(t, a, s)
}
func TestHighBiasedQuery(t *testing.T) {
rand.Seed(42)
s := NewHighBiased(RelativeEpsilon)
a := populateStream(s)
verifyHighPercsWithRelativeEpsilon(t, a, s)
}
// BrokenTestTargetedMerge is broken, see Merge doc comment.
func BrokenTestTargetedMerge(t *testing.T) {
rand.Seed(42)
s1 := NewTargeted(Targets)
s2 := NewTargeted(Targets)
a := populateStream(s1)
a = append(a, populateStream(s2)...)
s1.Merge(s2.Samples())
verifyPercsWithAbsoluteEpsilon(t, a, s1)
}
// BrokenTestLowBiasedMerge is broken, see Merge doc comment.
func BrokenTestLowBiasedMerge(t *testing.T) {
rand.Seed(42)
s1 := NewLowBiased(RelativeEpsilon)
s2 := NewLowBiased(RelativeEpsilon)
a := populateStream(s1)
a = append(a, populateStream(s2)...)
s1.Merge(s2.Samples())
verifyLowPercsWithRelativeEpsilon(t, a, s2)
}
// BrokenTestHighBiasedMerge is broken, see Merge doc comment.
func BrokenTestHighBiasedMerge(t *testing.T) {
rand.Seed(42)
s1 := NewHighBiased(RelativeEpsilon)
s2 := NewHighBiased(RelativeEpsilon)
a := populateStream(s1)
a = append(a, populateStream(s2)...)
s1.Merge(s2.Samples())
verifyHighPercsWithRelativeEpsilon(t, a, s2)
}
func TestUncompressed(t *testing.T) {
q := NewTargeted(Targets)
for i := 100; i > 0; i-- {
q.Insert(float64(i))
}
if g := q.Count(); g != 100 {
t.Errorf("want count 100, got %d", g)
}
// Before compression, Query should have 100% accuracy.
for quantile := range Targets {
w := quantile * 100
if g := q.Query(quantile); g != w {
t.Errorf("want %f, got %f", w, g)
}
}
}
func TestUncompressedSamples(t *testing.T) {
q := NewTargeted(map[float64]float64{0.99: 0.001})
for i := 1; i <= 100; i++ {
q.Insert(float64(i))
}
if g := q.Samples().Len(); g != 100 {
t.Errorf("want count 100, got %d", g)
}
}
func TestUncompressedOne(t *testing.T) {
q := NewTargeted(map[float64]float64{0.99: 0.01})
q.Insert(3.14)
if g := q.Query(0.90); g != 3.14 {
t.Error("want PI, got", g)
}
}
func TestDefaults(t *testing.T) {
if g := NewTargeted(map[float64]float64{0.99: 0.001}).Query(0.99); g != 0 {
t.Errorf("want 0, got %f", g)
}
}

90
vendor/github.com/beorn7/perks/topk/topk.go generated vendored Normal file
View File

@ -0,0 +1,90 @@
package topk
import (
"sort"
)
// http://www.cs.ucsb.edu/research/tech_reports/reports/2005-23.pdf
type Element struct {
Value string
Count int
}
type Samples []*Element
func (sm Samples) Len() int {
return len(sm)
}
func (sm Samples) Less(i, j int) bool {
return sm[i].Count < sm[j].Count
}
func (sm Samples) Swap(i, j int) {
sm[i], sm[j] = sm[j], sm[i]
}
type Stream struct {
k int
mon map[string]*Element
// the minimum Element
min *Element
}
func New(k int) *Stream {
s := new(Stream)
s.k = k
s.mon = make(map[string]*Element)
s.min = &Element{}
// Track k+1 so that less frequent items contend for that spot,
// resulting in k being more accurate.
return s
}
func (s *Stream) Insert(x string) {
s.insert(&Element{x, 1})
}
func (s *Stream) Merge(sm Samples) {
for _, e := range sm {
s.insert(e)
}
}
func (s *Stream) insert(in *Element) {
e := s.mon[in.Value]
if e != nil {
e.Count++
} else {
if len(s.mon) < s.k+1 {
e = &Element{in.Value, in.Count}
s.mon[in.Value] = e
} else {
e = s.min
delete(s.mon, e.Value)
e.Value = in.Value
e.Count += in.Count
s.min = e
}
}
if e.Count < s.min.Count {
s.min = e
}
}
func (s *Stream) Query() Samples {
var sm Samples
for _, e := range s.mon {
sm = append(sm, e)
}
sort.Sort(sort.Reverse(sm))
if len(sm) < s.k {
return sm
}
return sm[:s.k]
}

57
vendor/github.com/beorn7/perks/topk/topk_test.go generated vendored Normal file
View File

@ -0,0 +1,57 @@
package topk
import (
"fmt"
"math/rand"
"sort"
"testing"
)
func TestTopK(t *testing.T) {
stream := New(10)
ss := []*Stream{New(10), New(10), New(10)}
m := make(map[string]int)
for _, s := range ss {
for i := 0; i < 1e6; i++ {
v := fmt.Sprintf("%x", int8(rand.ExpFloat64()))
s.Insert(v)
m[v]++
}
stream.Merge(s.Query())
}
var sm Samples
for x, s := range m {
sm = append(sm, &Element{x, s})
}
sort.Sort(sort.Reverse(sm))
g := stream.Query()
if len(g) != 10 {
t.Fatalf("got %d, want 10", len(g))
}
for i, e := range g {
if sm[i].Value != e.Value {
t.Errorf("at %d: want %q, got %q", i, sm[i].Value, e.Value)
}
}
}
func TestQuery(t *testing.T) {
queryTests := []struct {
value string
expected int
}{
{"a", 1},
{"b", 2},
{"c", 2},
}
stream := New(2)
for _, tt := range queryTests {
stream.Insert(tt.value)
if n := len(stream.Query()); n != tt.expected {
t.Errorf("want %d, got %d", tt.expected, n)
}
}
}

16
vendor/github.com/golang/protobuf/.gitignore generated vendored Normal file
View File

@ -0,0 +1,16 @@
.DS_Store
*.[568ao]
*.ao
*.so
*.pyc
._*
.nfs.*
[568a].out
*~
*.orig
core
_obj
_test
_testmain.go
protoc-gen-go/testdata/multi/*.pb.go
_conformance/_conformance

18
vendor/github.com/golang/protobuf/.travis.yml generated vendored Normal file
View File

@ -0,0 +1,18 @@
sudo: false
language: go
go:
- 1.6.x
- 1.7.x
- 1.8.x
- 1.9.x
install:
- go get -v -d -t github.com/golang/protobuf/...
- curl -L https://github.com/google/protobuf/releases/download/v3.3.0/protoc-3.3.0-linux-x86_64.zip -o /tmp/protoc.zip
- unzip /tmp/protoc.zip -d $HOME/protoc
env:
- PATH=$HOME/protoc/bin:$PATH
script:
- make all test

3
vendor/github.com/golang/protobuf/AUTHORS generated vendored Normal file
View File

@ -0,0 +1,3 @@
# This source code refers to The Go Authors for copyright purposes.
# The master list of authors is in the main Go distribution,
# visible at http://tip.golang.org/AUTHORS.

3
vendor/github.com/golang/protobuf/CONTRIBUTORS generated vendored Normal file
View File

@ -0,0 +1,3 @@
# This source code was written by the Go contributors.
# The master list of contributors is in the main Go distribution,
# visible at http://tip.golang.org/CONTRIBUTORS.

31
vendor/github.com/golang/protobuf/LICENSE generated vendored Normal file
View File

@ -0,0 +1,31 @@
Go support for Protocol Buffers - Google's data interchange format
Copyright 2010 The Go Authors. All rights reserved.
https://github.com/golang/protobuf
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

40
vendor/github.com/golang/protobuf/Make.protobuf generated vendored Normal file
View File

@ -0,0 +1,40 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Includable Makefile to add a rule for generating .pb.go files from .proto files
# (Google protocol buffer descriptions).
# Typical use if myproto.proto is a file in package mypackage in this directory:
#
# include $(GOROOT)/src/pkg/github.com/golang/protobuf/Make.protobuf
%.pb.go: %.proto
protoc --go_out=. $<

55
vendor/github.com/golang/protobuf/Makefile generated vendored Normal file
View File

@ -0,0 +1,55 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
all: install
install:
go install ./proto ./jsonpb ./ptypes
go install ./protoc-gen-go
test:
go test ./proto ./jsonpb ./ptypes
make -C protoc-gen-go/testdata test
clean:
go clean ./...
nuke:
go clean -i ./...
regenerate:
make -C protoc-gen-go/descriptor regenerate
make -C protoc-gen-go/plugin regenerate
make -C protoc-gen-go/testdata regenerate
make -C proto/testdata regenerate
make -C jsonpb/jsonpb_test_proto regenerate
make -C _conformance regenerate

244
vendor/github.com/golang/protobuf/README.md generated vendored Normal file
View File

@ -0,0 +1,244 @@
# Go support for Protocol Buffers
[![Build Status](https://travis-ci.org/golang/protobuf.svg?branch=master)](https://travis-ci.org/golang/protobuf)
[![GoDoc](https://godoc.org/github.com/golang/protobuf?status.svg)](https://godoc.org/github.com/golang/protobuf)
Google's data interchange format.
Copyright 2010 The Go Authors.
https://github.com/golang/protobuf
This package and the code it generates require at least Go 1.4.
This software implements Go bindings for protocol buffers. For
information about protocol buffers themselves, see
https://developers.google.com/protocol-buffers/
## Installation ##
To use this software, you must:
- Install the standard C++ implementation of protocol buffers from
https://developers.google.com/protocol-buffers/
- Of course, install the Go compiler and tools from
https://golang.org/
See
https://golang.org/doc/install
for details or, if you are using gccgo, follow the instructions at
https://golang.org/doc/install/gccgo
- Grab the code from the repository and install the proto package.
The simplest way is to run `go get -u github.com/golang/protobuf/protoc-gen-go`.
The compiler plugin, protoc-gen-go, will be installed in $GOBIN,
defaulting to $GOPATH/bin. It must be in your $PATH for the protocol
compiler, protoc, to find it.
This software has two parts: a 'protocol compiler plugin' that
generates Go source files that, once compiled, can access and manage
protocol buffers; and a library that implements run-time support for
encoding (marshaling), decoding (unmarshaling), and accessing protocol
buffers.
There is support for gRPC in Go using protocol buffers.
See the note at the bottom of this file for details.
There are no insertion points in the plugin.
## Using protocol buffers with Go ##
Once the software is installed, there are two steps to using it.
First you must compile the protocol buffer definitions and then import
them, with the support library, into your program.
To compile the protocol buffer definition, run protoc with the --go_out
parameter set to the directory you want to output the Go code to.
protoc --go_out=. *.proto
The generated files will be suffixed .pb.go. See the Test code below
for an example using such a file.
The package comment for the proto library contains text describing
the interface provided in Go for protocol buffers. Here is an edited
version.
==========
The proto package converts data structures to and from the
wire format of protocol buffers. It works in concert with the
Go source code generated for .proto files by the protocol compiler.
A summary of the properties of the protocol buffer interface
for a protocol buffer variable v:
- Names are turned from camel_case to CamelCase for export.
- There are no methods on v to set fields; just treat
them as structure fields.
- There are getters that return a field's value if set,
and return the field's default value if unset.
The getters work even if the receiver is a nil message.
- The zero value for a struct is its correct initialization state.
All desired fields must be set before marshaling.
- A Reset() method will restore a protobuf struct to its zero state.
- Non-repeated fields are pointers to the values; nil means unset.
That is, optional or required field int32 f becomes F *int32.
- Repeated fields are slices.
- Helper functions are available to aid the setting of fields.
Helpers for getting values are superseded by the
GetFoo methods and their use is deprecated.
msg.Foo = proto.String("hello") // set field
- Constants are defined to hold the default values of all fields that
have them. They have the form Default_StructName_FieldName.
Because the getter methods handle defaulted values,
direct use of these constants should be rare.
- Enums are given type names and maps from names to values.
Enum values are prefixed with the enum's type name. Enum types have
a String method, and an Enum method to assist in message construction.
- Nested groups and enums have type names prefixed with the name of
the surrounding message type.
- Extensions are given descriptor names that start with E_,
followed by an underscore-delimited list of the nested messages
that contain it (if any) followed by the CamelCased name of the
extension field itself. HasExtension, ClearExtension, GetExtension
and SetExtension are functions for manipulating extensions.
- Oneof field sets are given a single field in their message,
with distinguished wrapper types for each possible field value.
- Marshal and Unmarshal are functions to encode and decode the wire format.
When the .proto file specifies `syntax="proto3"`, there are some differences:
- Non-repeated fields of non-message type are values instead of pointers.
- Enum types do not get an Enum method.
Consider file test.proto, containing
```proto
syntax = "proto2";
package example;
enum FOO { X = 17; };
message Test {
required string label = 1;
optional int32 type = 2 [default=77];
repeated int64 reps = 3;
optional group OptionalGroup = 4 {
required string RequiredField = 5;
}
}
```
To create and play with a Test object from the example package,
```go
package main
import (
"log"
"github.com/golang/protobuf/proto"
"path/to/example"
)
func main() {
test := &example.Test {
Label: proto.String("hello"),
Type: proto.Int32(17),
Reps: []int64{1, 2, 3},
Optionalgroup: &example.Test_OptionalGroup {
RequiredField: proto.String("good bye"),
},
}
data, err := proto.Marshal(test)
if err != nil {
log.Fatal("marshaling error: ", err)
}
newTest := &example.Test{}
err = proto.Unmarshal(data, newTest)
if err != nil {
log.Fatal("unmarshaling error: ", err)
}
// Now test and newTest contain the same data.
if test.GetLabel() != newTest.GetLabel() {
log.Fatalf("data mismatch %q != %q", test.GetLabel(), newTest.GetLabel())
}
// etc.
}
```
## Parameters ##
To pass extra parameters to the plugin, use a comma-separated
parameter list separated from the output directory by a colon:
protoc --go_out=plugins=grpc,import_path=mypackage:. *.proto
- `import_prefix=xxx` - a prefix that is added onto the beginning of
all imports. Useful for things like generating protos in a
subdirectory, or regenerating vendored protobufs in-place.
- `import_path=foo/bar` - used as the package if no input files
declare `go_package`. If it contains slashes, everything up to the
rightmost slash is ignored.
- `plugins=plugin1+plugin2` - specifies the list of sub-plugins to
load. The only plugin in this repo is `grpc`.
- `Mfoo/bar.proto=quux/shme` - declares that foo/bar.proto is
associated with Go package quux/shme. This is subject to the
import_prefix parameter.
## gRPC Support ##
If a proto file specifies RPC services, protoc-gen-go can be instructed to
generate code compatible with gRPC (http://www.grpc.io/). To do this, pass
the `plugins` parameter to protoc-gen-go; the usual way is to insert it into
the --go_out argument to protoc:
protoc --go_out=plugins=grpc:. *.proto
## Compatibility ##
The library and the generated code are expected to be stable over time.
However, we reserve the right to make breaking changes without notice for the
following reasons:
- Security. A security issue in the specification or implementation may come to
light whose resolution requires breaking compatibility. We reserve the right
to address such security issues.
- Unspecified behavior. There are some aspects of the Protocol Buffers
specification that are undefined. Programs that depend on such unspecified
behavior may break in future releases.
- Specification errors or changes. If it becomes necessary to address an
inconsistency, incompleteness, or change in the Protocol Buffers
specification, resolving the issue could affect the meaning or legality of
existing programs. We reserve the right to address such issues, including
updating the implementations.
- Bugs. If the library has a bug that violates the specification, a program
that depends on the buggy behavior may break if the bug is fixed. We reserve
the right to fix such bugs.
- Adding methods or fields to generated structs. These may conflict with field
names that already exist in a schema, causing applications to break. When the
code generator encounters a field in the schema that would collide with a
generated field or method name, the code generator will append an underscore
to the generated field or method name.
- Adding, removing, or changing methods or fields in generated structs that
start with `XXX`. These parts of the generated code are exported out of
necessity, but should not be considered part of the public API.
- Adding, removing, or changing unexported symbols in generated code.
Any breaking changes outside of these will be announced 6 months in advance to
protobuf@googlegroups.com.
You should, whenever possible, use generated code created by the `protoc-gen-go`
tool built at the same commit as the `proto` package. The `proto` package
declares package-level constants in the form `ProtoPackageIsVersionX`.
Application code and generated code may depend on one of these constants to
ensure that compilation will fail if the available version of the proto library
is too old. Whenever we make a change to the generated code that requires newer
library support, in the same commit we will increment the version number of the
generated code and declare a new package-level constant whose name incorporates
the latest version number. Removing a compatibility constant is considered a
breaking change and would be subject to the announcement policy stated above.
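
To illustrate the mechanism (this snippet is not taken from this repository), a file can pin the minimum supported generated-code version by referencing one of these constants; if an older proto library lacks the constant, compilation fails:

```go
package example

import "github.com/golang/protobuf/proto"

// Compilation breaks against a proto library that predates version 2 of the
// generated-code contract, which is the guard described above.
const _ = proto.ProtoPackageIsVersion2
```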
The `protoc-gen-go/generator` package exposes a plugin interface,
which is used by the gRPC code generation. This interface is not
supported and is subject to incompatible changes without notice.

View File

@ -0,0 +1,33 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2016 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
regenerate:
protoc --go_out=Mgoogle/protobuf/any.proto=github.com/golang/protobuf/ptypes/any,Mgoogle/protobuf/duration.proto=github.com/golang/protobuf/ptypes/duration,Mgoogle/protobuf/struct.proto=github.com/golang/protobuf/ptypes/struct,Mgoogle/protobuf/timestamp.proto=github.com/golang/protobuf/ptypes/timestamp,Mgoogle/protobuf/wrappers.proto=github.com/golang/protobuf/ptypes/wrappers,Mgoogle/protobuf/field_mask.proto=google.golang.org/genproto/protobuf:. conformance_proto/conformance.proto

View File

@ -0,0 +1,161 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// conformance implements the conformance test subprocess protocol as
// documented in conformance.proto.
package main
import (
"encoding/binary"
"fmt"
"io"
"os"
pb "github.com/golang/protobuf/_conformance/conformance_proto"
"github.com/golang/protobuf/jsonpb"
"github.com/golang/protobuf/proto"
)
func main() {
var sizeBuf [4]byte
inbuf := make([]byte, 0, 4096)
outbuf := proto.NewBuffer(nil)
for {
if _, err := io.ReadFull(os.Stdin, sizeBuf[:]); err == io.EOF {
break
} else if err != nil {
fmt.Fprintln(os.Stderr, "go conformance: read request:", err)
os.Exit(1)
}
size := binary.LittleEndian.Uint32(sizeBuf[:])
if int(size) > cap(inbuf) {
inbuf = make([]byte, size)
}
inbuf = inbuf[:size]
if _, err := io.ReadFull(os.Stdin, inbuf); err != nil {
fmt.Fprintln(os.Stderr, "go conformance: read request:", err)
os.Exit(1)
}
req := new(pb.ConformanceRequest)
if err := proto.Unmarshal(inbuf, req); err != nil {
fmt.Fprintln(os.Stderr, "go conformance: parse request:", err)
os.Exit(1)
}
res := handle(req)
if err := outbuf.Marshal(res); err != nil {
fmt.Fprintln(os.Stderr, "go conformance: marshal response:", err)
os.Exit(1)
}
binary.LittleEndian.PutUint32(sizeBuf[:], uint32(len(outbuf.Bytes())))
if _, err := os.Stdout.Write(sizeBuf[:]); err != nil {
fmt.Fprintln(os.Stderr, "go conformance: write response:", err)
os.Exit(1)
}
if _, err := os.Stdout.Write(outbuf.Bytes()); err != nil {
fmt.Fprintln(os.Stderr, "go conformance: write response:", err)
os.Exit(1)
}
outbuf.Reset()
}
}
var jsonMarshaler = jsonpb.Marshaler{
OrigName: true,
}
func handle(req *pb.ConformanceRequest) *pb.ConformanceResponse {
var err error
var msg pb.TestAllTypes
switch p := req.Payload.(type) {
case *pb.ConformanceRequest_ProtobufPayload:
err = proto.Unmarshal(p.ProtobufPayload, &msg)
case *pb.ConformanceRequest_JsonPayload:
err = jsonpb.UnmarshalString(p.JsonPayload, &msg)
if err != nil && err.Error() == "unmarshaling Any not supported yet" {
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_Skipped{
Skipped: err.Error(),
},
}
}
default:
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_RuntimeError{
RuntimeError: "unknown request payload type",
},
}
}
if err != nil {
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_ParseError{
ParseError: err.Error(),
},
}
}
switch req.RequestedOutputFormat {
case pb.WireFormat_PROTOBUF:
p, err := proto.Marshal(&msg)
if err != nil {
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_SerializeError{
SerializeError: err.Error(),
},
}
}
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_ProtobufPayload{
ProtobufPayload: p,
},
}
case pb.WireFormat_JSON:
p, err := jsonMarshaler.MarshalToString(&msg)
if err != nil {
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_SerializeError{
SerializeError: err.Error(),
},
}
}
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_JsonPayload{
JsonPayload: p,
},
}
default:
return &pb.ConformanceResponse{
Result: &pb.ConformanceResponse_RuntimeError{
RuntimeError: "unknown output format",
},
}
}
}

File diff suppressed because it is too large

View File

@ -0,0 +1,285 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
// https://developers.google.com/protocol-buffers/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
syntax = "proto3";
package conformance;
option java_package = "com.google.protobuf.conformance";
import "google/protobuf/any.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/field_mask.proto";
import "google/protobuf/struct.proto";
import "google/protobuf/timestamp.proto";
import "google/protobuf/wrappers.proto";
// This defines the conformance testing protocol. This protocol exists between
// the conformance test suite itself and the code being tested. For each test,
// the suite will send a ConformanceRequest message and expect a
// ConformanceResponse message.
//
// You can run the tests in either of two ways:
//
// 1. in-process (using the interface in conformance_test.h).
//
// 2. as a sub-process communicating over a pipe. Information about how to
// do this is in conformance_test_runner.cc.
//
// Pros/cons of the two approaches:
//
// - running as a sub-process is much simpler for languages other than C/C++.
//
// - running as a sub-process may be more tricky in unusual environments like
// iOS apps, where fork/stdin/stdout are not available.
enum WireFormat {
UNSPECIFIED = 0;
PROTOBUF = 1;
JSON = 2;
}
// Represents a single test case's input. The testee should:
//
// 1. parse this proto (which should always succeed)
// 2. parse the protobuf or JSON payload in "payload" (which may fail)
// 3. if the parse succeeded, serialize the message in the requested format.
message ConformanceRequest {
// The payload (whether protobuf or JSON) is always for a TestAllTypes proto
// (see below).
oneof payload {
bytes protobuf_payload = 1;
string json_payload = 2;
}
// Which format should the testee serialize its message to?
WireFormat requested_output_format = 3;
}
// Represents a single test case's output.
message ConformanceResponse {
oneof result {
// This string should be set to indicate parsing failed. The string can
// provide more information about the parse error if it is available.
//
// Setting this string does not necessarily mean the testee failed the
// test. Some of the test cases are intentionally invalid input.
string parse_error = 1;
// If the input was successfully parsed but errors occurred when
// serializing it to the requested output format, set the error message in
// this field.
string serialize_error = 6;
// This should be set if some other error occurred. This will always
// indicate that the test failed. The string can provide more information
// about the failure.
string runtime_error = 2;
// If the input was successfully parsed and the requested output was
// protobuf, serialize it to protobuf and set it in this field.
bytes protobuf_payload = 3;
// If the input was successfully parsed and the requested output was JSON,
// serialize to JSON and set it in this field.
string json_payload = 4;
// For when the testee skipped the test, likely because a certain feature
// wasn't supported, like JSON input/output.
string skipped = 5;
}
}
// This proto includes every type of field in both singular and repeated
// forms.
message TestAllTypes {
message NestedMessage {
int32 a = 1;
TestAllTypes corecursive = 2;
}
enum NestedEnum {
FOO = 0;
BAR = 1;
BAZ = 2;
NEG = -1; // Intentionally negative.
}
// Singular
int32 optional_int32 = 1;
int64 optional_int64 = 2;
uint32 optional_uint32 = 3;
uint64 optional_uint64 = 4;
sint32 optional_sint32 = 5;
sint64 optional_sint64 = 6;
fixed32 optional_fixed32 = 7;
fixed64 optional_fixed64 = 8;
sfixed32 optional_sfixed32 = 9;
sfixed64 optional_sfixed64 = 10;
float optional_float = 11;
double optional_double = 12;
bool optional_bool = 13;
string optional_string = 14;
bytes optional_bytes = 15;
NestedMessage optional_nested_message = 18;
ForeignMessage optional_foreign_message = 19;
NestedEnum optional_nested_enum = 21;
ForeignEnum optional_foreign_enum = 22;
string optional_string_piece = 24 [ctype=STRING_PIECE];
string optional_cord = 25 [ctype=CORD];
TestAllTypes recursive_message = 27;
// Repeated
repeated int32 repeated_int32 = 31;
repeated int64 repeated_int64 = 32;
repeated uint32 repeated_uint32 = 33;
repeated uint64 repeated_uint64 = 34;
repeated sint32 repeated_sint32 = 35;
repeated sint64 repeated_sint64 = 36;
repeated fixed32 repeated_fixed32 = 37;
repeated fixed64 repeated_fixed64 = 38;
repeated sfixed32 repeated_sfixed32 = 39;
repeated sfixed64 repeated_sfixed64 = 40;
repeated float repeated_float = 41;
repeated double repeated_double = 42;
repeated bool repeated_bool = 43;
repeated string repeated_string = 44;
repeated bytes repeated_bytes = 45;
repeated NestedMessage repeated_nested_message = 48;
repeated ForeignMessage repeated_foreign_message = 49;
repeated NestedEnum repeated_nested_enum = 51;
repeated ForeignEnum repeated_foreign_enum = 52;
repeated string repeated_string_piece = 54 [ctype=STRING_PIECE];
repeated string repeated_cord = 55 [ctype=CORD];
// Map
map<int32, int32> map_int32_int32 = 56;
map<int64, int64> map_int64_int64 = 57;
map<uint32, uint32> map_uint32_uint32 = 58;
map<uint64, uint64> map_uint64_uint64 = 59;
map<sint32, sint32> map_sint32_sint32 = 60;
map<sint64, sint64> map_sint64_sint64 = 61;
map<fixed32, fixed32> map_fixed32_fixed32 = 62;
map<fixed64, fixed64> map_fixed64_fixed64 = 63;
map<sfixed32, sfixed32> map_sfixed32_sfixed32 = 64;
map<sfixed64, sfixed64> map_sfixed64_sfixed64 = 65;
map<int32, float> map_int32_float = 66;
map<int32, double> map_int32_double = 67;
map<bool, bool> map_bool_bool = 68;
map<string, string> map_string_string = 69;
map<string, bytes> map_string_bytes = 70;
map<string, NestedMessage> map_string_nested_message = 71;
map<string, ForeignMessage> map_string_foreign_message = 72;
map<string, NestedEnum> map_string_nested_enum = 73;
map<string, ForeignEnum> map_string_foreign_enum = 74;
oneof oneof_field {
uint32 oneof_uint32 = 111;
NestedMessage oneof_nested_message = 112;
string oneof_string = 113;
bytes oneof_bytes = 114;
bool oneof_bool = 115;
uint64 oneof_uint64 = 116;
float oneof_float = 117;
double oneof_double = 118;
NestedEnum oneof_enum = 119;
}
// Well-known types
google.protobuf.BoolValue optional_bool_wrapper = 201;
google.protobuf.Int32Value optional_int32_wrapper = 202;
google.protobuf.Int64Value optional_int64_wrapper = 203;
google.protobuf.UInt32Value optional_uint32_wrapper = 204;
google.protobuf.UInt64Value optional_uint64_wrapper = 205;
google.protobuf.FloatValue optional_float_wrapper = 206;
google.protobuf.DoubleValue optional_double_wrapper = 207;
google.protobuf.StringValue optional_string_wrapper = 208;
google.protobuf.BytesValue optional_bytes_wrapper = 209;
repeated google.protobuf.BoolValue repeated_bool_wrapper = 211;
repeated google.protobuf.Int32Value repeated_int32_wrapper = 212;
repeated google.protobuf.Int64Value repeated_int64_wrapper = 213;
repeated google.protobuf.UInt32Value repeated_uint32_wrapper = 214;
repeated google.protobuf.UInt64Value repeated_uint64_wrapper = 215;
repeated google.protobuf.FloatValue repeated_float_wrapper = 216;
repeated google.protobuf.DoubleValue repeated_double_wrapper = 217;
repeated google.protobuf.StringValue repeated_string_wrapper = 218;
repeated google.protobuf.BytesValue repeated_bytes_wrapper = 219;
google.protobuf.Duration optional_duration = 301;
google.protobuf.Timestamp optional_timestamp = 302;
google.protobuf.FieldMask optional_field_mask = 303;
google.protobuf.Struct optional_struct = 304;
google.protobuf.Any optional_any = 305;
google.protobuf.Value optional_value = 306;
repeated google.protobuf.Duration repeated_duration = 311;
repeated google.protobuf.Timestamp repeated_timestamp = 312;
repeated google.protobuf.FieldMask repeated_fieldmask = 313;
repeated google.protobuf.Struct repeated_struct = 324;
repeated google.protobuf.Any repeated_any = 315;
repeated google.protobuf.Value repeated_value = 316;
// Test field-name-to-JSON-name convention.
// (protobuf says names can be any valid C/C++ identifier.)
int32 fieldname1 = 401;
int32 field_name2 = 402;
int32 _field_name3 = 403;
int32 field__name4_ = 404;
int32 field0name5 = 405;
int32 field_0_name6 = 406;
int32 fieldName7 = 407;
int32 FieldName8 = 408;
int32 field_Name9 = 409;
int32 Field_Name10 = 410;
int32 FIELD_NAME11 = 411;
int32 FIELD_name12 = 412;
int32 __field_name13 = 413;
int32 __Field_name14 = 414;
int32 field__name15 = 415;
int32 field__Name16 = 416;
int32 field_name17__ = 417;
int32 Field_name18__ = 418;
}
message ForeignMessage {
int32 c = 1;
}
enum ForeignEnum {
FOREIGN_FOO = 0;
FOREIGN_BAR = 1;
FOREIGN_BAZ = 2;
}
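The request/response protocol documented in the comments above boils down to a read-parse-serialize loop in the testee. Below is a minimal sketch, in Go, of how a testee might service a single ConformanceRequest. It assumes Go bindings generated from this file by protoc-gen-go: the conformancepb import path is hypothetical, and the oneof wrapper type names simply follow the usual protoc-gen-go conventions rather than anything shipped in this commit.

package main

import (
	"github.com/golang/protobuf/jsonpb"
	"github.com/golang/protobuf/proto"

	conformancepb "example.com/conformance" // hypothetical import path for generated bindings
)

// handleRequest follows the three steps documented on ConformanceRequest:
// the request itself is already parsed, so only the payload parse may fail.
func handleRequest(req *conformancepb.ConformanceRequest) *conformancepb.ConformanceResponse {
	msg := &conformancepb.TestAllTypes{}

	// Step 2: parse the protobuf or JSON payload; failure is a legitimate outcome.
	var err error
	switch p := req.Payload.(type) {
	case *conformancepb.ConformanceRequest_ProtobufPayload:
		err = proto.Unmarshal(p.ProtobufPayload, msg)
	case *conformancepb.ConformanceRequest_JsonPayload:
		err = jsonpb.UnmarshalString(p.JsonPayload, msg)
	}
	if err != nil {
		return &conformancepb.ConformanceResponse{
			Result: &conformancepb.ConformanceResponse_ParseError{ParseError: err.Error()},
		}
	}

	// Step 3: re-serialize the message in the requested output format.
	switch req.RequestedOutputFormat {
	case conformancepb.WireFormat_PROTOBUF:
		b, err := proto.Marshal(msg)
		if err != nil {
			return &conformancepb.ConformanceResponse{
				Result: &conformancepb.ConformanceResponse_SerializeError{SerializeError: err.Error()},
			}
		}
		return &conformancepb.ConformanceResponse{
			Result: &conformancepb.ConformanceResponse_ProtobufPayload{ProtobufPayload: b},
		}
	case conformancepb.WireFormat_JSON:
		s, err := (&jsonpb.Marshaler{}).MarshalToString(msg)
		if err != nil {
			return &conformancepb.ConformanceResponse{
				Result: &conformancepb.ConformanceResponse_SerializeError{SerializeError: err.Error()},
			}
		}
		return &conformancepb.ConformanceResponse{
			Result: &conformancepb.ConformanceResponse_JsonPayload{JsonPayload: s},
		}
	}
	return &conformancepb.ConformanceResponse{
		Result: &conformancepb.ConformanceResponse_Skipped{Skipped: "unsupported output format"},
	}
}

A real testee would wrap this in a loop that reads requests from stdin and writes responses to stdout, per the sub-process model mentioned above.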

View File

@ -0,0 +1,93 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Package descriptor provides functions for obtaining protocol buffer
// descriptors for generated Go types.
//
// These functions cannot go in package proto because they depend on the
// generated protobuf descriptor messages, which themselves depend on proto.
package descriptor
import (
"bytes"
"compress/gzip"
"fmt"
"io/ioutil"
"github.com/golang/protobuf/proto"
protobuf "github.com/golang/protobuf/protoc-gen-go/descriptor"
)
// extractFile extracts a FileDescriptorProto from a gzip'd buffer.
func extractFile(gz []byte) (*protobuf.FileDescriptorProto, error) {
r, err := gzip.NewReader(bytes.NewReader(gz))
if err != nil {
return nil, fmt.Errorf("failed to open gzip reader: %v", err)
}
defer r.Close()
b, err := ioutil.ReadAll(r)
if err != nil {
return nil, fmt.Errorf("failed to uncompress descriptor: %v", err)
}
fd := new(protobuf.FileDescriptorProto)
if err := proto.Unmarshal(b, fd); err != nil {
return nil, fmt.Errorf("malformed FileDescriptorProto: %v", err)
}
return fd, nil
}
// Message is a proto.Message with a method to return its descriptor.
//
// Message types generated by the protocol compiler always satisfy
// the Message interface.
type Message interface {
proto.Message
Descriptor() ([]byte, []int)
}
// ForMessage returns a FileDescriptorProto and a DescriptorProto from within it
// describing the given message.
func ForMessage(msg Message) (fd *protobuf.FileDescriptorProto, md *protobuf.DescriptorProto) {
gz, path := msg.Descriptor()
fd, err := extractFile(gz)
if err != nil {
panic(fmt.Sprintf("invalid FileDescriptorProto for %T: %v", msg, err))
}
md = fd.MessageType[path[0]]
for _, i := range path[1:] {
md = md.NestedType[i]
}
return fd, md
}
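As a quick illustration of what ForMessage returns (a sketch, not part of the vendored file), the descriptors of a generated well-known type can be inspected directly. The example below reuses the ptypes/duration package that the jsonpb tests further down also import.

package main

import (
	"fmt"

	"github.com/golang/protobuf/descriptor"
	durpb "github.com/golang/protobuf/ptypes/duration"
)

func main() {
	// A nil pointer is fine: Descriptor() is defined on the type and never
	// dereferences the receiver, which the test below also relies on.
	fd, md := descriptor.ForMessage((*durpb.Duration)(nil))
	fmt.Println(fd.GetPackage(), md.GetName()) // google.protobuf Duration

	// The DescriptorProto gives access to field-level metadata.
	for _, f := range md.GetField() {
		fmt.Println(f.GetName(), f.GetNumber())
	}
}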

View File

@ -0,0 +1,32 @@
package descriptor_test
import (
"fmt"
"testing"
"github.com/golang/protobuf/descriptor"
tpb "github.com/golang/protobuf/proto/testdata"
protobuf "github.com/golang/protobuf/protoc-gen-go/descriptor"
)
func TestMessage(t *testing.T) {
var msg *protobuf.DescriptorProto
fd, md := descriptor.ForMessage(msg)
if pkg, want := fd.GetPackage(), "google.protobuf"; pkg != want {
t.Errorf("descriptor.ForMessage(%T).GetPackage() = %q; want %q", msg, pkg, want)
}
if name, want := md.GetName(), "DescriptorProto"; name != want {
t.Fatalf("descriptor.ForMessage(%T).GetName() = %q; want %q", msg, name, want)
}
}
func Example_Options() {
var msg *tpb.MyMessageSet
_, md := descriptor.ForMessage(msg)
if md.GetOptions().GetMessageSetWireFormat() {
fmt.Printf("%v uses option message_set_wire_format.\n", md.GetName())
}
// Output:
// MyMessageSet uses option message_set_wire_format.
}

1083
vendor/github.com/golang/protobuf/jsonpb/jsonpb.go generated vendored Normal file

File diff suppressed because it is too large

896
vendor/github.com/golang/protobuf/jsonpb/jsonpb_test.go generated vendored Normal file
View File

@ -0,0 +1,896 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2015 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package jsonpb
import (
"bytes"
"encoding/json"
"io"
"math"
"reflect"
"strings"
"testing"
"github.com/golang/protobuf/proto"
pb "github.com/golang/protobuf/jsonpb/jsonpb_test_proto"
proto3pb "github.com/golang/protobuf/proto/proto3_proto"
"github.com/golang/protobuf/ptypes"
anypb "github.com/golang/protobuf/ptypes/any"
durpb "github.com/golang/protobuf/ptypes/duration"
stpb "github.com/golang/protobuf/ptypes/struct"
tspb "github.com/golang/protobuf/ptypes/timestamp"
wpb "github.com/golang/protobuf/ptypes/wrappers"
)
var (
marshaler = Marshaler{}
marshalerAllOptions = Marshaler{
Indent: " ",
}
simpleObject = &pb.Simple{
OInt32: proto.Int32(-32),
OInt64: proto.Int64(-6400000000),
OUint32: proto.Uint32(32),
OUint64: proto.Uint64(6400000000),
OSint32: proto.Int32(-13),
OSint64: proto.Int64(-2600000000),
OFloat: proto.Float32(3.14),
ODouble: proto.Float64(6.02214179e23),
OBool: proto.Bool(true),
OString: proto.String("hello \"there\""),
OBytes: []byte("beep boop"),
}
simpleObjectJSON = `{` +
`"oBool":true,` +
`"oInt32":-32,` +
`"oInt64":"-6400000000",` +
`"oUint32":32,` +
`"oUint64":"6400000000",` +
`"oSint32":-13,` +
`"oSint64":"-2600000000",` +
`"oFloat":3.14,` +
`"oDouble":6.02214179e+23,` +
`"oString":"hello \"there\"",` +
`"oBytes":"YmVlcCBib29w"` +
`}`
simpleObjectPrettyJSON = `{
"oBool": true,
"oInt32": -32,
"oInt64": "-6400000000",
"oUint32": 32,
"oUint64": "6400000000",
"oSint32": -13,
"oSint64": "-2600000000",
"oFloat": 3.14,
"oDouble": 6.02214179e+23,
"oString": "hello \"there\"",
"oBytes": "YmVlcCBib29w"
}`
repeatsObject = &pb.Repeats{
RBool: []bool{true, false, true},
RInt32: []int32{-3, -4, -5},
RInt64: []int64{-123456789, -987654321},
RUint32: []uint32{1, 2, 3},
RUint64: []uint64{6789012345, 3456789012},
RSint32: []int32{-1, -2, -3},
RSint64: []int64{-6789012345, -3456789012},
RFloat: []float32{3.14, 6.28},
RDouble: []float64{299792458 * 1e20, 6.62606957e-34},
RString: []string{"happy", "days"},
RBytes: [][]byte{[]byte("skittles"), []byte("m&m's")},
}
repeatsObjectJSON = `{` +
`"rBool":[true,false,true],` +
`"rInt32":[-3,-4,-5],` +
`"rInt64":["-123456789","-987654321"],` +
`"rUint32":[1,2,3],` +
`"rUint64":["6789012345","3456789012"],` +
`"rSint32":[-1,-2,-3],` +
`"rSint64":["-6789012345","-3456789012"],` +
`"rFloat":[3.14,6.28],` +
`"rDouble":[2.99792458e+28,6.62606957e-34],` +
`"rString":["happy","days"],` +
`"rBytes":["c2tpdHRsZXM=","bSZtJ3M="]` +
`}`
repeatsObjectPrettyJSON = `{
"rBool": [
true,
false,
true
],
"rInt32": [
-3,
-4,
-5
],
"rInt64": [
"-123456789",
"-987654321"
],
"rUint32": [
1,
2,
3
],
"rUint64": [
"6789012345",
"3456789012"
],
"rSint32": [
-1,
-2,
-3
],
"rSint64": [
"-6789012345",
"-3456789012"
],
"rFloat": [
3.14,
6.28
],
"rDouble": [
2.99792458e+28,
6.62606957e-34
],
"rString": [
"happy",
"days"
],
"rBytes": [
"c2tpdHRsZXM=",
"bSZtJ3M="
]
}`
innerSimple = &pb.Simple{OInt32: proto.Int32(-32)}
innerSimple2 = &pb.Simple{OInt64: proto.Int64(25)}
innerRepeats = &pb.Repeats{RString: []string{"roses", "red"}}
innerRepeats2 = &pb.Repeats{RString: []string{"violets", "blue"}}
complexObject = &pb.Widget{
Color: pb.Widget_GREEN.Enum(),
RColor: []pb.Widget_Color{pb.Widget_RED, pb.Widget_GREEN, pb.Widget_BLUE},
Simple: innerSimple,
RSimple: []*pb.Simple{innerSimple, innerSimple2},
Repeats: innerRepeats,
RRepeats: []*pb.Repeats{innerRepeats, innerRepeats2},
}
complexObjectJSON = `{"color":"GREEN",` +
`"rColor":["RED","GREEN","BLUE"],` +
`"simple":{"oInt32":-32},` +
`"rSimple":[{"oInt32":-32},{"oInt64":"25"}],` +
`"repeats":{"rString":["roses","red"]},` +
`"rRepeats":[{"rString":["roses","red"]},{"rString":["violets","blue"]}]` +
`}`
complexObjectPrettyJSON = `{
"color": "GREEN",
"rColor": [
"RED",
"GREEN",
"BLUE"
],
"simple": {
"oInt32": -32
},
"rSimple": [
{
"oInt32": -32
},
{
"oInt64": "25"
}
],
"repeats": {
"rString": [
"roses",
"red"
]
},
"rRepeats": [
{
"rString": [
"roses",
"red"
]
},
{
"rString": [
"violets",
"blue"
]
}
]
}`
colorPrettyJSON = `{
"color": 2
}`
colorListPrettyJSON = `{
"color": 1000,
"rColor": [
"RED"
]
}`
nummyPrettyJSON = `{
"nummy": {
"1": 2,
"3": 4
}
}`
objjyPrettyJSON = `{
"objjy": {
"1": {
"dub": 1
}
}
}`
realNumber = &pb.Real{Value: proto.Float64(3.14159265359)}
realNumberName = "Pi"
complexNumber = &pb.Complex{Imaginary: proto.Float64(0.5772156649)}
realNumberJSON = `{` +
`"value":3.14159265359,` +
`"[jsonpb.Complex.real_extension]":{"imaginary":0.5772156649},` +
`"[jsonpb.name]":"Pi"` +
`}`
anySimple = &pb.KnownTypes{
An: &anypb.Any{
TypeUrl: "something.example.com/jsonpb.Simple",
Value: []byte{
// &pb.Simple{OBool:true}
1 << 3, 1,
},
},
}
anySimpleJSON = `{"an":{"@type":"something.example.com/jsonpb.Simple","oBool":true}}`
anySimplePrettyJSON = `{
"an": {
"@type": "something.example.com/jsonpb.Simple",
"oBool": true
}
}`
anyWellKnown = &pb.KnownTypes{
An: &anypb.Any{
TypeUrl: "type.googleapis.com/google.protobuf.Duration",
Value: []byte{
// &durpb.Duration{Seconds: 1, Nanos: 212000000 }
1 << 3, 1, // seconds
2 << 3, 0x80, 0xba, 0x8b, 0x65, // nanos
},
},
}
anyWellKnownJSON = `{"an":{"@type":"type.googleapis.com/google.protobuf.Duration","value":"1.212s"}}`
anyWellKnownPrettyJSON = `{
"an": {
"@type": "type.googleapis.com/google.protobuf.Duration",
"value": "1.212s"
}
}`
nonFinites = &pb.NonFinites{
FNan: proto.Float32(float32(math.NaN())),
FPinf: proto.Float32(float32(math.Inf(1))),
FNinf: proto.Float32(float32(math.Inf(-1))),
DNan: proto.Float64(float64(math.NaN())),
DPinf: proto.Float64(float64(math.Inf(1))),
DNinf: proto.Float64(float64(math.Inf(-1))),
}
nonFinitesJSON = `{` +
`"fNan":"NaN",` +
`"fPinf":"Infinity",` +
`"fNinf":"-Infinity",` +
`"dNan":"NaN",` +
`"dPinf":"Infinity",` +
`"dNinf":"-Infinity"` +
`}`
)
func init() {
if err := proto.SetExtension(realNumber, pb.E_Name, &realNumberName); err != nil {
panic(err)
}
if err := proto.SetExtension(realNumber, pb.E_Complex_RealExtension, complexNumber); err != nil {
panic(err)
}
}
var marshalingTests = []struct {
desc string
marshaler Marshaler
pb proto.Message
json string
}{
{"simple flat object", marshaler, simpleObject, simpleObjectJSON},
{"simple pretty object", marshalerAllOptions, simpleObject, simpleObjectPrettyJSON},
{"non-finite floats fields object", marshaler, nonFinites, nonFinitesJSON},
{"repeated fields flat object", marshaler, repeatsObject, repeatsObjectJSON},
{"repeated fields pretty object", marshalerAllOptions, repeatsObject, repeatsObjectPrettyJSON},
{"nested message/enum flat object", marshaler, complexObject, complexObjectJSON},
{"nested message/enum pretty object", marshalerAllOptions, complexObject, complexObjectPrettyJSON},
{"enum-string flat object", Marshaler{},
&pb.Widget{Color: pb.Widget_BLUE.Enum()}, `{"color":"BLUE"}`},
{"enum-value pretty object", Marshaler{EnumsAsInts: true, Indent: " "},
&pb.Widget{Color: pb.Widget_BLUE.Enum()}, colorPrettyJSON},
{"unknown enum value object", marshalerAllOptions,
&pb.Widget{Color: pb.Widget_Color(1000).Enum(), RColor: []pb.Widget_Color{pb.Widget_RED}}, colorListPrettyJSON},
{"repeated proto3 enum", Marshaler{},
&proto3pb.Message{RFunny: []proto3pb.Message_Humour{
proto3pb.Message_PUNS,
proto3pb.Message_SLAPSTICK,
}},
`{"rFunny":["PUNS","SLAPSTICK"]}`},
{"repeated proto3 enum as int", Marshaler{EnumsAsInts: true},
&proto3pb.Message{RFunny: []proto3pb.Message_Humour{
proto3pb.Message_PUNS,
proto3pb.Message_SLAPSTICK,
}},
`{"rFunny":[1,2]}`},
{"empty value", marshaler, &pb.Simple3{}, `{}`},
{"empty value emitted", Marshaler{EmitDefaults: true}, &pb.Simple3{}, `{"dub":0}`},
{"empty repeated emitted", Marshaler{EmitDefaults: true}, &pb.SimpleSlice3{}, `{"slices":[]}`},
{"empty map emitted", Marshaler{EmitDefaults: true}, &pb.SimpleMap3{}, `{"stringy":{}}`},
{"nested struct null", Marshaler{EmitDefaults: true}, &pb.SimpleNull3{}, `{"simple":null}`},
{"map<int64, int32>", marshaler, &pb.Mappy{Nummy: map[int64]int32{1: 2, 3: 4}}, `{"nummy":{"1":2,"3":4}}`},
{"map<int64, int32>", marshalerAllOptions, &pb.Mappy{Nummy: map[int64]int32{1: 2, 3: 4}}, nummyPrettyJSON},
{"map<string, string>", marshaler,
&pb.Mappy{Strry: map[string]string{`"one"`: "two", "three": "four"}},
`{"strry":{"\"one\"":"two","three":"four"}}`},
{"map<int32, Object>", marshaler,
&pb.Mappy{Objjy: map[int32]*pb.Simple3{1: {Dub: 1}}}, `{"objjy":{"1":{"dub":1}}}`},
{"map<int32, Object>", marshalerAllOptions,
&pb.Mappy{Objjy: map[int32]*pb.Simple3{1: {Dub: 1}}}, objjyPrettyJSON},
{"map<int64, string>", marshaler, &pb.Mappy{Buggy: map[int64]string{1234: "yup"}},
`{"buggy":{"1234":"yup"}}`},
{"map<bool, bool>", marshaler, &pb.Mappy{Booly: map[bool]bool{false: true}}, `{"booly":{"false":true}}`},
// TODO: This is broken.
//{"map<string, enum>", marshaler, &pb.Mappy{Enumy: map[string]pb.Numeral{"XIV": pb.Numeral_ROMAN}}, `{"enumy":{"XIV":"ROMAN"}`},
{"map<string, enum as int>", Marshaler{EnumsAsInts: true}, &pb.Mappy{Enumy: map[string]pb.Numeral{"XIV": pb.Numeral_ROMAN}}, `{"enumy":{"XIV":2}}`},
{"map<int32, bool>", marshaler, &pb.Mappy{S32Booly: map[int32]bool{1: true, 3: false, 10: true, 12: false}}, `{"s32booly":{"1":true,"3":false,"10":true,"12":false}}`},
{"map<int64, bool>", marshaler, &pb.Mappy{S64Booly: map[int64]bool{1: true, 3: false, 10: true, 12: false}}, `{"s64booly":{"1":true,"3":false,"10":true,"12":false}}`},
{"map<uint32, bool>", marshaler, &pb.Mappy{U32Booly: map[uint32]bool{1: true, 3: false, 10: true, 12: false}}, `{"u32booly":{"1":true,"3":false,"10":true,"12":false}}`},
{"map<uint64, bool>", marshaler, &pb.Mappy{U64Booly: map[uint64]bool{1: true, 3: false, 10: true, 12: false}}, `{"u64booly":{"1":true,"3":false,"10":true,"12":false}}`},
{"proto2 map<int64, string>", marshaler, &pb.Maps{MInt64Str: map[int64]string{213: "cat"}},
`{"mInt64Str":{"213":"cat"}}`},
{"proto2 map<bool, Object>", marshaler,
&pb.Maps{MBoolSimple: map[bool]*pb.Simple{true: {OInt32: proto.Int32(1)}}},
`{"mBoolSimple":{"true":{"oInt32":1}}}`},
{"oneof, not set", marshaler, &pb.MsgWithOneof{}, `{}`},
{"oneof, set", marshaler, &pb.MsgWithOneof{Union: &pb.MsgWithOneof_Title{"Grand Poobah"}}, `{"title":"Grand Poobah"}`},
{"force orig_name", Marshaler{OrigName: true}, &pb.Simple{OInt32: proto.Int32(4)},
`{"o_int32":4}`},
{"proto2 extension", marshaler, realNumber, realNumberJSON},
{"Any with message", marshaler, anySimple, anySimpleJSON},
{"Any with message and indent", marshalerAllOptions, anySimple, anySimplePrettyJSON},
{"Any with WKT", marshaler, anyWellKnown, anyWellKnownJSON},
{"Any with WKT and indent", marshalerAllOptions, anyWellKnown, anyWellKnownPrettyJSON},
{"Duration", marshaler, &pb.KnownTypes{Dur: &durpb.Duration{Seconds: 3}}, `{"dur":"3.000s"}`},
{"Struct", marshaler, &pb.KnownTypes{St: &stpb.Struct{
Fields: map[string]*stpb.Value{
"one": {Kind: &stpb.Value_StringValue{"loneliest number"}},
"two": {Kind: &stpb.Value_NullValue{stpb.NullValue_NULL_VALUE}},
},
}}, `{"st":{"one":"loneliest number","two":null}}`},
{"empty ListValue", marshaler, &pb.KnownTypes{Lv: &stpb.ListValue{}}, `{"lv":[]}`},
{"basic ListValue", marshaler, &pb.KnownTypes{Lv: &stpb.ListValue{Values: []*stpb.Value{
{Kind: &stpb.Value_StringValue{"x"}},
{Kind: &stpb.Value_NullValue{}},
{Kind: &stpb.Value_NumberValue{3}},
{Kind: &stpb.Value_BoolValue{true}},
}}}, `{"lv":["x",null,3,true]}`},
{"Timestamp", marshaler, &pb.KnownTypes{Ts: &tspb.Timestamp{Seconds: 14e8, Nanos: 21e6}}, `{"ts":"2014-05-13T16:53:20.021Z"}`},
{"number Value", marshaler, &pb.KnownTypes{Val: &stpb.Value{Kind: &stpb.Value_NumberValue{1}}}, `{"val":1}`},
{"null Value", marshaler, &pb.KnownTypes{Val: &stpb.Value{Kind: &stpb.Value_NullValue{stpb.NullValue_NULL_VALUE}}}, `{"val":null}`},
{"string number value", marshaler, &pb.KnownTypes{Val: &stpb.Value{Kind: &stpb.Value_StringValue{"9223372036854775807"}}}, `{"val":"9223372036854775807"}`},
{"list of lists Value", marshaler, &pb.KnownTypes{Val: &stpb.Value{
Kind: &stpb.Value_ListValue{&stpb.ListValue{
Values: []*stpb.Value{
{Kind: &stpb.Value_StringValue{"x"}},
{Kind: &stpb.Value_ListValue{&stpb.ListValue{
Values: []*stpb.Value{
{Kind: &stpb.Value_ListValue{&stpb.ListValue{
Values: []*stpb.Value{{Kind: &stpb.Value_StringValue{"y"}}},
}}},
{Kind: &stpb.Value_StringValue{"z"}},
},
}}},
},
}},
}}, `{"val":["x",[["y"],"z"]]}`},
{"DoubleValue", marshaler, &pb.KnownTypes{Dbl: &wpb.DoubleValue{Value: 1.2}}, `{"dbl":1.2}`},
{"FloatValue", marshaler, &pb.KnownTypes{Flt: &wpb.FloatValue{Value: 1.2}}, `{"flt":1.2}`},
{"Int64Value", marshaler, &pb.KnownTypes{I64: &wpb.Int64Value{Value: -3}}, `{"i64":"-3"}`},
{"UInt64Value", marshaler, &pb.KnownTypes{U64: &wpb.UInt64Value{Value: 3}}, `{"u64":"3"}`},
{"Int32Value", marshaler, &pb.KnownTypes{I32: &wpb.Int32Value{Value: -4}}, `{"i32":-4}`},
{"UInt32Value", marshaler, &pb.KnownTypes{U32: &wpb.UInt32Value{Value: 4}}, `{"u32":4}`},
{"BoolValue", marshaler, &pb.KnownTypes{Bool: &wpb.BoolValue{Value: true}}, `{"bool":true}`},
{"StringValue", marshaler, &pb.KnownTypes{Str: &wpb.StringValue{Value: "plush"}}, `{"str":"plush"}`},
{"BytesValue", marshaler, &pb.KnownTypes{Bytes: &wpb.BytesValue{Value: []byte("wow")}}, `{"bytes":"d293"}`},
}
func TestMarshaling(t *testing.T) {
for _, tt := range marshalingTests {
json, err := tt.marshaler.MarshalToString(tt.pb)
if err != nil {
t.Errorf("%s: marshaling error: %v", tt.desc, err)
} else if tt.json != json {
t.Errorf("%s: got [%v] want [%v]", tt.desc, json, tt.json)
}
}
}
func TestMarshalJSONPBMarshaler(t *testing.T) {
rawJson := `{ "foo": "bar", "baz": [0, 1, 2, 3] }`
msg := dynamicMessage{rawJson: rawJson}
str, err := new(Marshaler).MarshalToString(&msg)
if err != nil {
t.Errorf("an unexpected error occurred when marshalling JSONPBMarshaler: %v", err)
}
if str != rawJson {
t.Errorf("marshalling JSON produced incorrect output: got %s, wanted %s", str, rawJson)
}
}
func TestMarshalAnyJSONPBMarshaler(t *testing.T) {
msg := dynamicMessage{rawJson: `{ "foo": "bar", "baz": [0, 1, 2, 3] }`}
a, err := ptypes.MarshalAny(&msg)
if err != nil {
t.Errorf("an unexpected error occurred when marshalling to Any: %v", err)
}
str, err := new(Marshaler).MarshalToString(a)
if err != nil {
t.Errorf("an unexpected error occurred when marshalling Any to JSON: %v", err)
}
// after custom marshaling, it's round-tripped through JSON decoding/encoding already,
// so the keys are sorted, whitespace is compacted, and "@type" key has been added
expected := `{"@type":"type.googleapis.com/` + dynamicMessageName + `","baz":[0,1,2,3],"foo":"bar"}`
if str != expected {
t.Errorf("marshalling JSON produced incorrect output: got %s, wanted %s", str, expected)
}
}
var unmarshalingTests = []struct {
desc string
unmarshaler Unmarshaler
json string
pb proto.Message
}{
{"simple flat object", Unmarshaler{}, simpleObjectJSON, simpleObject},
{"simple pretty object", Unmarshaler{}, simpleObjectPrettyJSON, simpleObject},
{"repeated fields flat object", Unmarshaler{}, repeatsObjectJSON, repeatsObject},
{"repeated fields pretty object", Unmarshaler{}, repeatsObjectPrettyJSON, repeatsObject},
{"nested message/enum flat object", Unmarshaler{}, complexObjectJSON, complexObject},
{"nested message/enum pretty object", Unmarshaler{}, complexObjectPrettyJSON, complexObject},
{"enum-string object", Unmarshaler{}, `{"color":"BLUE"}`, &pb.Widget{Color: pb.Widget_BLUE.Enum()}},
{"enum-value object", Unmarshaler{}, "{\n \"color\": 2\n}", &pb.Widget{Color: pb.Widget_BLUE.Enum()}},
{"unknown field with allowed option", Unmarshaler{AllowUnknownFields: true}, `{"unknown": "foo"}`, new(pb.Simple)},
{"proto3 enum string", Unmarshaler{}, `{"hilarity":"PUNS"}`, &proto3pb.Message{Hilarity: proto3pb.Message_PUNS}},
{"proto3 enum value", Unmarshaler{}, `{"hilarity":1}`, &proto3pb.Message{Hilarity: proto3pb.Message_PUNS}},
{"unknown enum value object",
Unmarshaler{},
"{\n \"color\": 1000,\n \"r_color\": [\n \"RED\"\n ]\n}",
&pb.Widget{Color: pb.Widget_Color(1000).Enum(), RColor: []pb.Widget_Color{pb.Widget_RED}}},
{"repeated proto3 enum", Unmarshaler{}, `{"rFunny":["PUNS","SLAPSTICK"]}`,
&proto3pb.Message{RFunny: []proto3pb.Message_Humour{
proto3pb.Message_PUNS,
proto3pb.Message_SLAPSTICK,
}}},
{"repeated proto3 enum as int", Unmarshaler{}, `{"rFunny":[1,2]}`,
&proto3pb.Message{RFunny: []proto3pb.Message_Humour{
proto3pb.Message_PUNS,
proto3pb.Message_SLAPSTICK,
}}},
{"repeated proto3 enum as mix of strings and ints", Unmarshaler{}, `{"rFunny":["PUNS",2]}`,
&proto3pb.Message{RFunny: []proto3pb.Message_Humour{
proto3pb.Message_PUNS,
proto3pb.Message_SLAPSTICK,
}}},
{"unquoted int64 object", Unmarshaler{}, `{"oInt64":-314}`, &pb.Simple{OInt64: proto.Int64(-314)}},
{"unquoted uint64 object", Unmarshaler{}, `{"oUint64":123}`, &pb.Simple{OUint64: proto.Uint64(123)}},
{"NaN", Unmarshaler{}, `{"oDouble":"NaN"}`, &pb.Simple{ODouble: proto.Float64(math.NaN())}},
{"Inf", Unmarshaler{}, `{"oFloat":"Infinity"}`, &pb.Simple{OFloat: proto.Float32(float32(math.Inf(1)))}},
{"-Inf", Unmarshaler{}, `{"oDouble":"-Infinity"}`, &pb.Simple{ODouble: proto.Float64(math.Inf(-1))}},
{"map<int64, int32>", Unmarshaler{}, `{"nummy":{"1":2,"3":4}}`, &pb.Mappy{Nummy: map[int64]int32{1: 2, 3: 4}}},
{"map<string, string>", Unmarshaler{}, `{"strry":{"\"one\"":"two","three":"four"}}`, &pb.Mappy{Strry: map[string]string{`"one"`: "two", "three": "four"}}},
{"map<int32, Object>", Unmarshaler{}, `{"objjy":{"1":{"dub":1}}}`, &pb.Mappy{Objjy: map[int32]*pb.Simple3{1: {Dub: 1}}}},
{"proto2 extension", Unmarshaler{}, realNumberJSON, realNumber},
{"Any with message", Unmarshaler{}, anySimpleJSON, anySimple},
{"Any with message and indent", Unmarshaler{}, anySimplePrettyJSON, anySimple},
{"Any with WKT", Unmarshaler{}, anyWellKnownJSON, anyWellKnown},
{"Any with WKT and indent", Unmarshaler{}, anyWellKnownPrettyJSON, anyWellKnown},
// TODO: This is broken.
//{"map<string, enum>", Unmarshaler{}, `{"enumy":{"XIV":"ROMAN"}`, &pb.Mappy{Enumy: map[string]pb.Numeral{"XIV": pb.Numeral_ROMAN}}},
{"map<string, enum as int>", Unmarshaler{}, `{"enumy":{"XIV":2}}`, &pb.Mappy{Enumy: map[string]pb.Numeral{"XIV": pb.Numeral_ROMAN}}},
{"oneof", Unmarshaler{}, `{"salary":31000}`, &pb.MsgWithOneof{Union: &pb.MsgWithOneof_Salary{31000}}},
{"oneof spec name", Unmarshaler{}, `{"Country":"Australia"}`, &pb.MsgWithOneof{Union: &pb.MsgWithOneof_Country{"Australia"}}},
{"oneof orig_name", Unmarshaler{}, `{"Country":"Australia"}`, &pb.MsgWithOneof{Union: &pb.MsgWithOneof_Country{"Australia"}}},
{"oneof spec name2", Unmarshaler{}, `{"homeAddress":"Australia"}`, &pb.MsgWithOneof{Union: &pb.MsgWithOneof_HomeAddress{"Australia"}}},
{"oneof orig_name2", Unmarshaler{}, `{"home_address":"Australia"}`, &pb.MsgWithOneof{Union: &pb.MsgWithOneof_HomeAddress{"Australia"}}},
{"orig_name input", Unmarshaler{}, `{"o_bool":true}`, &pb.Simple{OBool: proto.Bool(true)}},
{"camelName input", Unmarshaler{}, `{"oBool":true}`, &pb.Simple{OBool: proto.Bool(true)}},
{"Duration", Unmarshaler{}, `{"dur":"3.000s"}`, &pb.KnownTypes{Dur: &durpb.Duration{Seconds: 3}}},
{"null Duration", Unmarshaler{}, `{"dur":null}`, &pb.KnownTypes{Dur: nil}},
{"Timestamp", Unmarshaler{}, `{"ts":"2014-05-13T16:53:20.021Z"}`, &pb.KnownTypes{Ts: &tspb.Timestamp{Seconds: 14e8, Nanos: 21e6}}},
{"PreEpochTimestamp", Unmarshaler{}, `{"ts":"1969-12-31T23:59:58.999999995Z"}`, &pb.KnownTypes{Ts: &tspb.Timestamp{Seconds: -2, Nanos: 999999995}}},
{"ZeroTimeTimestamp", Unmarshaler{}, `{"ts":"0001-01-01T00:00:00Z"}`, &pb.KnownTypes{Ts: &tspb.Timestamp{Seconds: -62135596800, Nanos: 0}}},
{"null Timestamp", Unmarshaler{}, `{"ts":null}`, &pb.KnownTypes{Ts: nil}},
{"null Struct", Unmarshaler{}, `{"st": null}`, &pb.KnownTypes{St: nil}},
{"empty Struct", Unmarshaler{}, `{"st": {}}`, &pb.KnownTypes{St: &stpb.Struct{}}},
{"basic Struct", Unmarshaler{}, `{"st": {"a": "x", "b": null, "c": 3, "d": true}}`, &pb.KnownTypes{St: &stpb.Struct{Fields: map[string]*stpb.Value{
"a": {Kind: &stpb.Value_StringValue{"x"}},
"b": {Kind: &stpb.Value_NullValue{}},
"c": {Kind: &stpb.Value_NumberValue{3}},
"d": {Kind: &stpb.Value_BoolValue{true}},
}}}},
{"nested Struct", Unmarshaler{}, `{"st": {"a": {"b": 1, "c": [{"d": true}, "f"]}}}`, &pb.KnownTypes{St: &stpb.Struct{Fields: map[string]*stpb.Value{
"a": {Kind: &stpb.Value_StructValue{&stpb.Struct{Fields: map[string]*stpb.Value{
"b": {Kind: &stpb.Value_NumberValue{1}},
"c": {Kind: &stpb.Value_ListValue{&stpb.ListValue{Values: []*stpb.Value{
{Kind: &stpb.Value_StructValue{&stpb.Struct{Fields: map[string]*stpb.Value{"d": {Kind: &stpb.Value_BoolValue{true}}}}}},
{Kind: &stpb.Value_StringValue{"f"}},
}}}},
}}}},
}}}},
{"null ListValue", Unmarshaler{}, `{"lv": null}`, &pb.KnownTypes{Lv: nil}},
{"empty ListValue", Unmarshaler{}, `{"lv": []}`, &pb.KnownTypes{Lv: &stpb.ListValue{}}},
{"basic ListValue", Unmarshaler{}, `{"lv": ["x", null, 3, true]}`, &pb.KnownTypes{Lv: &stpb.ListValue{Values: []*stpb.Value{
{Kind: &stpb.Value_StringValue{"x"}},
{Kind: &stpb.Value_NullValue{}},
{Kind: &stpb.Value_NumberValue{3}},
{Kind: &stpb.Value_BoolValue{true}},
}}}},
{"number Value", Unmarshaler{}, `{"val":1}`, &pb.KnownTypes{Val: &stpb.Value{Kind: &stpb.Value_NumberValue{1}}}},
{"null Value", Unmarshaler{}, `{"val":null}`, &pb.KnownTypes{Val: &stpb.Value{Kind: &stpb.Value_NullValue{stpb.NullValue_NULL_VALUE}}}},
{"bool Value", Unmarshaler{}, `{"val":true}`, &pb.KnownTypes{Val: &stpb.Value{Kind: &stpb.Value_BoolValue{true}}}},
{"string Value", Unmarshaler{}, `{"val":"x"}`, &pb.KnownTypes{Val: &stpb.Value{Kind: &stpb.Value_StringValue{"x"}}}},
{"string number value", Unmarshaler{}, `{"val":"9223372036854775807"}`, &pb.KnownTypes{Val: &stpb.Value{Kind: &stpb.Value_StringValue{"9223372036854775807"}}}},
{"list of lists Value", Unmarshaler{}, `{"val":["x", [["y"], "z"]]}`, &pb.KnownTypes{Val: &stpb.Value{
Kind: &stpb.Value_ListValue{&stpb.ListValue{
Values: []*stpb.Value{
{Kind: &stpb.Value_StringValue{"x"}},
{Kind: &stpb.Value_ListValue{&stpb.ListValue{
Values: []*stpb.Value{
{Kind: &stpb.Value_ListValue{&stpb.ListValue{
Values: []*stpb.Value{{Kind: &stpb.Value_StringValue{"y"}}},
}}},
{Kind: &stpb.Value_StringValue{"z"}},
},
}}},
},
}}}}},
{"DoubleValue", Unmarshaler{}, `{"dbl":1.2}`, &pb.KnownTypes{Dbl: &wpb.DoubleValue{Value: 1.2}}},
{"FloatValue", Unmarshaler{}, `{"flt":1.2}`, &pb.KnownTypes{Flt: &wpb.FloatValue{Value: 1.2}}},
{"Int64Value", Unmarshaler{}, `{"i64":"-3"}`, &pb.KnownTypes{I64: &wpb.Int64Value{Value: -3}}},
{"UInt64Value", Unmarshaler{}, `{"u64":"3"}`, &pb.KnownTypes{U64: &wpb.UInt64Value{Value: 3}}},
{"Int32Value", Unmarshaler{}, `{"i32":-4}`, &pb.KnownTypes{I32: &wpb.Int32Value{Value: -4}}},
{"UInt32Value", Unmarshaler{}, `{"u32":4}`, &pb.KnownTypes{U32: &wpb.UInt32Value{Value: 4}}},
{"BoolValue", Unmarshaler{}, `{"bool":true}`, &pb.KnownTypes{Bool: &wpb.BoolValue{Value: true}}},
{"StringValue", Unmarshaler{}, `{"str":"plush"}`, &pb.KnownTypes{Str: &wpb.StringValue{Value: "plush"}}},
{"BytesValue", Unmarshaler{}, `{"bytes":"d293"}`, &pb.KnownTypes{Bytes: &wpb.BytesValue{Value: []byte("wow")}}},
// Ensure that `null` as a value ends up with a nil pointer instead of a [type]Value struct.
{"null DoubleValue", Unmarshaler{}, `{"dbl":null}`, &pb.KnownTypes{Dbl: nil}},
{"null FloatValue", Unmarshaler{}, `{"flt":null}`, &pb.KnownTypes{Flt: nil}},
{"null Int64Value", Unmarshaler{}, `{"i64":null}`, &pb.KnownTypes{I64: nil}},
{"null UInt64Value", Unmarshaler{}, `{"u64":null}`, &pb.KnownTypes{U64: nil}},
{"null Int32Value", Unmarshaler{}, `{"i32":null}`, &pb.KnownTypes{I32: nil}},
{"null UInt32Value", Unmarshaler{}, `{"u32":null}`, &pb.KnownTypes{U32: nil}},
{"null BoolValue", Unmarshaler{}, `{"bool":null}`, &pb.KnownTypes{Bool: nil}},
{"null StringValue", Unmarshaler{}, `{"str":null}`, &pb.KnownTypes{Str: nil}},
{"null BytesValue", Unmarshaler{}, `{"bytes":null}`, &pb.KnownTypes{Bytes: nil}},
}
func TestUnmarshaling(t *testing.T) {
for _, tt := range unmarshalingTests {
// Make a new instance of the type of our expected object.
p := reflect.New(reflect.TypeOf(tt.pb).Elem()).Interface().(proto.Message)
err := tt.unmarshaler.Unmarshal(strings.NewReader(tt.json), p)
if err != nil {
t.Errorf("%s: %v", tt.desc, err)
continue
}
// For easier diffs, compare text strings of the protos.
exp := proto.MarshalTextString(tt.pb)
act := proto.MarshalTextString(p)
if string(exp) != string(act) {
t.Errorf("%s: got [%s] want [%s]", tt.desc, act, exp)
}
}
}
func TestUnmarshalNullArray(t *testing.T) {
var repeats pb.Repeats
if err := UnmarshalString(`{"rBool":null}`, &repeats); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(repeats, pb.Repeats{}) {
t.Errorf("got non-nil fields in [%#v]", repeats)
}
}
func TestUnmarshalNullObject(t *testing.T) {
var maps pb.Maps
if err := UnmarshalString(`{"mInt64Str":null}`, &maps); err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(maps, pb.Maps{}) {
t.Errorf("got non-nil fields in [%#v]", maps)
}
}
func TestUnmarshalNext(t *testing.T) {
// We only need to check against a few, not all of them.
tests := unmarshalingTests[:5]
// Create a buffer with many concatenated JSON objects.
var b bytes.Buffer
for _, tt := range tests {
b.WriteString(tt.json)
}
dec := json.NewDecoder(&b)
for _, tt := range tests {
// Make a new instance of the type of our expected object.
p := reflect.New(reflect.TypeOf(tt.pb).Elem()).Interface().(proto.Message)
err := tt.unmarshaler.UnmarshalNext(dec, p)
if err != nil {
t.Errorf("%s: %v", tt.desc, err)
continue
}
// For easier diffs, compare text strings of the protos.
exp := proto.MarshalTextString(tt.pb)
act := proto.MarshalTextString(p)
if string(exp) != string(act) {
t.Errorf("%s: got [%s] want [%s]", tt.desc, act, exp)
}
}
p := &pb.Simple{}
err := new(Unmarshaler).UnmarshalNext(dec, p)
if err != io.EOF {
t.Errorf("eof: got %v, expected io.EOF", err)
}
}
var unmarshalingShouldError = []struct {
desc string
in string
pb proto.Message
}{
{"a value", "666", new(pb.Simple)},
{"gibberish", "{adskja123;l23=-=", new(pb.Simple)},
{"unknown field", `{"unknown": "foo"}`, new(pb.Simple)},
{"unknown enum name", `{"hilarity":"DAVE"}`, new(proto3pb.Message)},
}
func TestUnmarshalingBadInput(t *testing.T) {
for _, tt := range unmarshalingShouldError {
err := UnmarshalString(tt.in, tt.pb)
if err == nil {
t.Errorf("an error was expected when parsing %q instead of an object", tt.desc)
}
}
}
type funcResolver func(turl string) (proto.Message, error)
func (fn funcResolver) Resolve(turl string) (proto.Message, error) {
return fn(turl)
}
func TestAnyWithCustomResolver(t *testing.T) {
var resolvedTypeUrls []string
resolver := funcResolver(func(turl string) (proto.Message, error) {
resolvedTypeUrls = append(resolvedTypeUrls, turl)
return new(pb.Simple), nil
})
msg := &pb.Simple{
OBytes: []byte{1, 2, 3, 4},
OBool: proto.Bool(true),
OString: proto.String("foobar"),
OInt64: proto.Int64(1020304),
}
msgBytes, err := proto.Marshal(msg)
if err != nil {
t.Errorf("an unexpected error occurred when marshaling message: %v", err)
}
// make an Any with a type URL that won't resolve w/out custom resolver
any := &anypb.Any{
TypeUrl: "https://foobar.com/some.random.MessageKind",
Value: msgBytes,
}
m := Marshaler{AnyResolver: resolver}
js, err := m.MarshalToString(any)
if err != nil {
t.Errorf("an unexpected error occurred when marshaling any to JSON: %v", err)
}
if len(resolvedTypeUrls) != 1 {
t.Errorf("custom resolver was not invoked during marshaling")
} else if resolvedTypeUrls[0] != "https://foobar.com/some.random.MessageKind" {
t.Errorf("custom resolver was invoked with wrong URL: got %q, wanted %q", resolvedTypeUrls[0], "https://foobar.com/some.random.MessageKind")
}
wanted := `{"@type":"https://foobar.com/some.random.MessageKind","oBool":true,"oInt64":"1020304","oString":"foobar","oBytes":"AQIDBA=="}`
if js != wanted {
t.Errorf("marshalling JSON produced incorrect output: got %s, wanted %s", js, wanted)
}
u := Unmarshaler{AnyResolver: resolver}
roundTrip := &anypb.Any{}
err = u.Unmarshal(bytes.NewReader([]byte(js)), roundTrip)
if err != nil {
t.Errorf("an unexpected error occurred when unmarshaling any from JSON: %v", err)
}
if len(resolvedTypeUrls) != 2 {
t.Errorf("custom resolver was not invoked during marshaling")
} else if resolvedTypeUrls[1] != "https://foobar.com/some.random.MessageKind" {
t.Errorf("custom resolver was invoked with wrong URL: got %q, wanted %q", resolvedTypeUrls[1], "https://foobar.com/some.random.MessageKind")
}
if !proto.Equal(any, roundTrip) {
t.Errorf("message contents not set correctly after unmarshalling JSON: got %s, wanted %s", roundTrip, any)
}
}
func TestUnmarshalJSONPBUnmarshaler(t *testing.T) {
rawJson := `{ "foo": "bar", "baz": [0, 1, 2, 3] }`
var msg dynamicMessage
if err := Unmarshal(strings.NewReader(rawJson), &msg); err != nil {
t.Errorf("an unexpected error occurred when parsing into JSONPBUnmarshaler: %v", err)
}
if msg.rawJson != rawJson {
t.Errorf("message contents not set correctly after unmarshalling JSON: got %s, wanted %s", msg.rawJson, rawJson)
}
}
func TestUnmarshalNullWithJSONPBUnmarshaler(t *testing.T) {
rawJson := `{"stringField":null}`
var ptrFieldMsg ptrFieldMessage
if err := Unmarshal(strings.NewReader(rawJson), &ptrFieldMsg); err != nil {
t.Errorf("unmarshal error: %v", err)
}
want := ptrFieldMessage{StringField: &stringField{IsSet: true, StringValue: "null"}}
if !proto.Equal(&ptrFieldMsg, &want) {
t.Errorf("unmarshal result StringField: got %v, want %v", ptrFieldMsg, want)
}
}
func TestUnmarshalAnyJSONPBUnmarshaler(t *testing.T) {
rawJson := `{ "@type": "blah.com/` + dynamicMessageName + `", "foo": "bar", "baz": [0, 1, 2, 3] }`
var got anypb.Any
if err := Unmarshal(strings.NewReader(rawJson), &got); err != nil {
t.Errorf("an unexpected error occurred when parsing into JSONPBUnmarshaler: %v", err)
}
dm := &dynamicMessage{rawJson: `{"baz":[0,1,2,3],"foo":"bar"}`}
var want anypb.Any
if b, err := proto.Marshal(dm); err != nil {
t.Errorf("an unexpected error occurred when marshaling message: %v", err)
} else {
want.TypeUrl = "blah.com/" + dynamicMessageName
want.Value = b
}
if !proto.Equal(&got, &want) {
t.Errorf("message contents not set correctly after unmarshalling JSON: got %s, wanted %s", got, want)
}
}
const (
dynamicMessageName = "google.protobuf.jsonpb.testing.dynamicMessage"
)
func init() {
// we register the custom type below so that we can use it in Any types
proto.RegisterType((*dynamicMessage)(nil), dynamicMessageName)
}
type ptrFieldMessage struct {
StringField *stringField `protobuf:"bytes,1,opt,name=stringField"`
}
func (m *ptrFieldMessage) Reset() {
}
func (m *ptrFieldMessage) String() string {
return m.StringField.StringValue
}
func (m *ptrFieldMessage) ProtoMessage() {
}
type stringField struct {
IsSet bool `protobuf:"varint,1,opt,name=isSet"`
StringValue string `protobuf:"bytes,2,opt,name=stringValue"`
}
func (s *stringField) Reset() {
}
func (s *stringField) String() string {
return s.StringValue
}
func (s *stringField) ProtoMessage() {
}
func (s *stringField) UnmarshalJSONPB(jum *Unmarshaler, js []byte) error {
s.IsSet = true
s.StringValue = string(js)
return nil
}
// dynamicMessage implements protobuf.Message but is not a normal generated message type.
// It provides implementations of JSONPBMarshaler and JSONPBUnmarshaler for JSON support.
type dynamicMessage struct {
rawJson string `protobuf:"bytes,1,opt,name=rawJson"`
}
func (m *dynamicMessage) Reset() {
m.rawJson = "{}"
}
func (m *dynamicMessage) String() string {
return m.rawJson
}
func (m *dynamicMessage) ProtoMessage() {
}
func (m *dynamicMessage) MarshalJSONPB(jm *Marshaler) ([]byte, error) {
return []byte(m.rawJson), nil
}
func (m *dynamicMessage) UnmarshalJSONPB(jum *Unmarshaler, js []byte) error {
m.rawJson = string(js)
return nil
}
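For orientation, the two types these table-driven tests exercise, Marshaler and Unmarshaler, are used roughly as follows. This is a minimal sketch rather than part of the vendored file; it borrows the proto3_proto test package imported above, and the JSON shown in the comments is only an expectation based on the marshaling tests.

package main

import (
	"fmt"
	"strings"

	"github.com/golang/protobuf/jsonpb"
	proto3pb "github.com/golang/protobuf/proto/proto3_proto"
)

func main() {
	// Marshal: proto3 zero values are omitted unless EmitDefaults is set,
	// and field names are camelCased unless OrigName is set.
	m := jsonpb.Marshaler{}
	js, err := m.MarshalToString(&proto3pb.Message{Name: "gopher", Hilarity: proto3pb.Message_PUNS})
	if err != nil {
		panic(err)
	}
	fmt.Println(js) // e.g. {"name":"gopher","hilarity":"PUNS"}

	// Unmarshal accepts enum values by name or by number, as the tests above show.
	var msg proto3pb.Message
	if err := jsonpb.Unmarshal(strings.NewReader(`{"name":"gopher","hilarity":1}`), &msg); err != nil {
		panic(err)
	}
	fmt.Println(msg.Name, msg.Hilarity) // gopher PUNS
}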

View File

@ -0,0 +1,33 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2015 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
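# Each Mgoogle/protobuf/<file>.proto=<import path> pair below tells protoc-gen-go
# which Go package to reference for that imported .proto file, so the generated
# test protos point at the ptypes packages vendored in this repository.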
regenerate:
protoc --go_out=Mgoogle/protobuf/any.proto=github.com/golang/protobuf/ptypes/any,Mgoogle/protobuf/duration.proto=github.com/golang/protobuf/ptypes/duration,Mgoogle/protobuf/struct.proto=github.com/golang/protobuf/ptypes/struct,Mgoogle/protobuf/timestamp.proto=github.com/golang/protobuf/ptypes/timestamp,Mgoogle/protobuf/wrappers.proto=github.com/golang/protobuf/ptypes/wrappers:. *.proto

View File

@ -0,0 +1,266 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: more_test_objects.proto
/*
Package jsonpb is a generated protocol buffer package.
It is generated from these files:
more_test_objects.proto
test_objects.proto
It has these top-level messages:
Simple3
SimpleSlice3
SimpleMap3
SimpleNull3
Mappy
Simple
NonFinites
Repeats
Widget
Maps
MsgWithOneof
Real
Complex
KnownTypes
*/
package jsonpb
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
type Numeral int32
const (
Numeral_UNKNOWN Numeral = 0
Numeral_ARABIC Numeral = 1
Numeral_ROMAN Numeral = 2
)
var Numeral_name = map[int32]string{
0: "UNKNOWN",
1: "ARABIC",
2: "ROMAN",
}
var Numeral_value = map[string]int32{
"UNKNOWN": 0,
"ARABIC": 1,
"ROMAN": 2,
}
func (x Numeral) String() string {
return proto.EnumName(Numeral_name, int32(x))
}
func (Numeral) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
type Simple3 struct {
Dub float64 `protobuf:"fixed64,1,opt,name=dub" json:"dub,omitempty"`
}
func (m *Simple3) Reset() { *m = Simple3{} }
func (m *Simple3) String() string { return proto.CompactTextString(m) }
func (*Simple3) ProtoMessage() {}
func (*Simple3) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Simple3) GetDub() float64 {
if m != nil {
return m.Dub
}
return 0
}
type SimpleSlice3 struct {
Slices []string `protobuf:"bytes,1,rep,name=slices" json:"slices,omitempty"`
}
func (m *SimpleSlice3) Reset() { *m = SimpleSlice3{} }
func (m *SimpleSlice3) String() string { return proto.CompactTextString(m) }
func (*SimpleSlice3) ProtoMessage() {}
func (*SimpleSlice3) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *SimpleSlice3) GetSlices() []string {
if m != nil {
return m.Slices
}
return nil
}
type SimpleMap3 struct {
Stringy map[string]string `protobuf:"bytes,1,rep,name=stringy" json:"stringy,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
}
func (m *SimpleMap3) Reset() { *m = SimpleMap3{} }
func (m *SimpleMap3) String() string { return proto.CompactTextString(m) }
func (*SimpleMap3) ProtoMessage() {}
func (*SimpleMap3) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *SimpleMap3) GetStringy() map[string]string {
if m != nil {
return m.Stringy
}
return nil
}
type SimpleNull3 struct {
Simple *Simple3 `protobuf:"bytes,1,opt,name=simple" json:"simple,omitempty"`
}
func (m *SimpleNull3) Reset() { *m = SimpleNull3{} }
func (m *SimpleNull3) String() string { return proto.CompactTextString(m) }
func (*SimpleNull3) ProtoMessage() {}
func (*SimpleNull3) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *SimpleNull3) GetSimple() *Simple3 {
if m != nil {
return m.Simple
}
return nil
}
type Mappy struct {
Nummy map[int64]int32 `protobuf:"bytes,1,rep,name=nummy" json:"nummy,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
Strry map[string]string `protobuf:"bytes,2,rep,name=strry" json:"strry,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Objjy map[int32]*Simple3 `protobuf:"bytes,3,rep,name=objjy" json:"objjy,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Buggy map[int64]string `protobuf:"bytes,4,rep,name=buggy" json:"buggy,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Booly map[bool]bool `protobuf:"bytes,5,rep,name=booly" json:"booly,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
Enumy map[string]Numeral `protobuf:"bytes,6,rep,name=enumy" json:"enumy,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"varint,2,opt,name=value,enum=jsonpb.Numeral"`
S32Booly map[int32]bool `protobuf:"bytes,7,rep,name=s32booly" json:"s32booly,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
S64Booly map[int64]bool `protobuf:"bytes,8,rep,name=s64booly" json:"s64booly,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
U32Booly map[uint32]bool `protobuf:"bytes,9,rep,name=u32booly" json:"u32booly,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
U64Booly map[uint64]bool `protobuf:"bytes,10,rep,name=u64booly" json:"u64booly,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
}
func (m *Mappy) Reset() { *m = Mappy{} }
func (m *Mappy) String() string { return proto.CompactTextString(m) }
func (*Mappy) ProtoMessage() {}
func (*Mappy) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (m *Mappy) GetNummy() map[int64]int32 {
if m != nil {
return m.Nummy
}
return nil
}
func (m *Mappy) GetStrry() map[string]string {
if m != nil {
return m.Strry
}
return nil
}
func (m *Mappy) GetObjjy() map[int32]*Simple3 {
if m != nil {
return m.Objjy
}
return nil
}
func (m *Mappy) GetBuggy() map[int64]string {
if m != nil {
return m.Buggy
}
return nil
}
func (m *Mappy) GetBooly() map[bool]bool {
if m != nil {
return m.Booly
}
return nil
}
func (m *Mappy) GetEnumy() map[string]Numeral {
if m != nil {
return m.Enumy
}
return nil
}
func (m *Mappy) GetS32Booly() map[int32]bool {
if m != nil {
return m.S32Booly
}
return nil
}
func (m *Mappy) GetS64Booly() map[int64]bool {
if m != nil {
return m.S64Booly
}
return nil
}
func (m *Mappy) GetU32Booly() map[uint32]bool {
if m != nil {
return m.U32Booly
}
return nil
}
func (m *Mappy) GetU64Booly() map[uint64]bool {
if m != nil {
return m.U64Booly
}
return nil
}
func init() {
proto.RegisterType((*Simple3)(nil), "jsonpb.Simple3")
proto.RegisterType((*SimpleSlice3)(nil), "jsonpb.SimpleSlice3")
proto.RegisterType((*SimpleMap3)(nil), "jsonpb.SimpleMap3")
proto.RegisterType((*SimpleNull3)(nil), "jsonpb.SimpleNull3")
proto.RegisterType((*Mappy)(nil), "jsonpb.Mappy")
proto.RegisterEnum("jsonpb.Numeral", Numeral_name, Numeral_value)
}
func init() { proto.RegisterFile("more_test_objects.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 526 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x94, 0xdd, 0x6b, 0xdb, 0x3c,
0x14, 0x87, 0x5f, 0x27, 0xf5, 0xd7, 0x49, 0xfb, 0x2e, 0x88, 0xb1, 0x99, 0xf4, 0x62, 0xc5, 0xb0,
0xad, 0x0c, 0xe6, 0x8b, 0x78, 0x74, 0x5d, 0x77, 0x95, 0x8e, 0x5e, 0x94, 0x11, 0x07, 0x1c, 0xc2,
0x2e, 0x4b, 0xdc, 0x99, 0x90, 0xcc, 0x5f, 0xd8, 0xd6, 0xc0, 0xd7, 0xfb, 0xbb, 0x07, 0xe3, 0x48,
0x72, 0x2d, 0x07, 0x85, 0x6c, 0x77, 0x52, 0x7e, 0xcf, 0xe3, 0x73, 0x24, 0x1d, 0x02, 0x2f, 0xd3,
0xbc, 0x8c, 0x1f, 0xea, 0xb8, 0xaa, 0x1f, 0xf2, 0x68, 0x17, 0x3f, 0xd6, 0x95, 0x57, 0x94, 0x79,
0x9d, 0x13, 0x63, 0x57, 0xe5, 0x59, 0x11, 0xb9, 0xe7, 0x60, 0x2e, 0xb7, 0x69, 0x91, 0xc4, 0x3e,
0x19, 0xc3, 0xf0, 0x3b, 0x8d, 0x1c, 0xed, 0x42, 0xbb, 0xd4, 0x42, 0x5c, 0xba, 0x6f, 0xe0, 0x94,
0x87, 0xcb, 0x64, 0xfb, 0x18, 0xfb, 0xe4, 0x05, 0x18, 0x15, 0xae, 0x2a, 0x47, 0xbb, 0x18, 0x5e,
0xda, 0xa1, 0xd8, 0xb9, 0xbf, 0x34, 0x00, 0x0e, 0xce, 0xd7, 0x85, 0x4f, 0x3e, 0x81, 0x59, 0xd5,
0xe5, 0x36, 0xdb, 0x34, 0x8c, 0x1b, 0x4d, 0x5f, 0x79, 0xbc, 0x9a, 0xd7, 0x41, 0xde, 0x92, 0x13,
0x77, 0x59, 0x5d, 0x36, 0x61, 0xcb, 0x4f, 0x6e, 0xe0, 0x54, 0x0e, 0xb0, 0xa7, 0x1f, 0x71, 0xc3,
0x7a, 0xb2, 0x43, 0x5c, 0x92, 0xe7, 0xa0, 0xff, 0x5c, 0x27, 0x34, 0x76, 0x06, 0xec, 0x37, 0xbe,
0xb9, 0x19, 0x5c, 0x6b, 0xee, 0x15, 0x8c, 0xf8, 0xf7, 0x03, 0x9a, 0x24, 0x3e, 0x79, 0x0b, 0x46,
0xc5, 0xb6, 0xcc, 0x1e, 0x4d, 0x9f, 0xf5, 0x9b, 0xf0, 0x43, 0x11, 0xbb, 0xbf, 0x2d, 0xd0, 0xe7,
0xeb, 0xa2, 0x68, 0x88, 0x07, 0x7a, 0x46, 0xd3, 0xb4, 0x6d, 0xdb, 0x69, 0x0d, 0x96, 0x7a, 0x01,
0x46, 0xbc, 0x5f, 0x8e, 0x21, 0x5f, 0xd5, 0x65, 0xd9, 0x38, 0x03, 0x15, 0xbf, 0xc4, 0x48, 0xf0,
0x0c, 0x43, 0x3e, 0x8f, 0x76, 0xbb, 0xc6, 0x19, 0xaa, 0xf8, 0x05, 0x46, 0x82, 0x67, 0x18, 0xf2,
0x11, 0xdd, 0x6c, 0x1a, 0xe7, 0x44, 0xc5, 0xdf, 0x62, 0x24, 0x78, 0x86, 0x31, 0x3e, 0xcf, 0x93,
0xc6, 0xd1, 0x95, 0x3c, 0x46, 0x2d, 0x8f, 0x6b, 0xe4, 0xe3, 0x8c, 0xa6, 0x8d, 0x63, 0xa8, 0xf8,
0x3b, 0x8c, 0x04, 0xcf, 0x30, 0xf2, 0x11, 0xac, 0xca, 0x9f, 0xf2, 0x12, 0x26, 0x53, 0xce, 0xf7,
0x8e, 0x2c, 0x52, 0x6e, 0x3d, 0xc1, 0x4c, 0xbc, 0xfa, 0xc0, 0x45, 0x4b, 0x29, 0x8a, 0xb4, 0x15,
0xc5, 0x16, 0x45, 0xda, 0x56, 0xb4, 0x55, 0xe2, 0xaa, 0x5f, 0x91, 0x4a, 0x15, 0x69, 0x5b, 0x11,
0x94, 0x62, 0xbf, 0x62, 0x0b, 0x4f, 0xae, 0x01, 0xba, 0x87, 0x96, 0xe7, 0x6f, 0xa8, 0x98, 0x3f,
0x5d, 0x9a, 0x3f, 0x34, 0xbb, 0x27, 0xff, 0x97, 0xc9, 0x9d, 0xdc, 0x03, 0x74, 0x8f, 0x2f, 0x9b,
0x3a, 0x37, 0x5f, 0xcb, 0xa6, 0x62, 0x92, 0xfb, 0x4d, 0x74, 0x73, 0x71, 0xac, 0x7d, 0x7b, 0xdf,
0x7c, 0xba, 0x10, 0xd9, 0xb4, 0x14, 0xa6, 0xb5, 0xd7, 0x7e, 0x37, 0x2b, 0x8a, 0x83, 0xf7, 0xda,
0xff, 0xbf, 0x6b, 0x3f, 0xa0, 0x69, 0x5c, 0xae, 0x13, 0xf9, 0x53, 0x9f, 0xe1, 0xac, 0x37, 0x43,
0x8a, 0xcb, 0x38, 0xdc, 0x07, 0xca, 0xf2, 0xab, 0x1e, 0x3b, 0xfe, 0xbe, 0xbc, 0x3a, 0x54, 0xf9,
0xec, 0x6f, 0xe4, 0x43, 0x95, 0x4f, 0x8e, 0xc8, 0xef, 0xde, 0x83, 0x29, 0x6e, 0x82, 0x8c, 0xc0,
0x5c, 0x05, 0x5f, 0x83, 0xc5, 0xb7, 0x60, 0xfc, 0x1f, 0x01, 0x30, 0x66, 0xe1, 0xec, 0xf6, 0xfe,
0xcb, 0x58, 0x23, 0x36, 0xe8, 0xe1, 0x62, 0x3e, 0x0b, 0xc6, 0x83, 0xc8, 0x60, 0x7f, 0xe0, 0xfe,
0x9f, 0x00, 0x00, 0x00, 0xff, 0xff, 0xdc, 0x84, 0x34, 0xaf, 0xdb, 0x05, 0x00, 0x00,
}

View File

@ -0,0 +1,69 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2015 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
syntax = "proto3";
package jsonpb;
message Simple3 {
double dub = 1;
}
message SimpleSlice3 {
repeated string slices = 1;
}
message SimpleMap3 {
map<string,string> stringy = 1;
}
message SimpleNull3 {
Simple3 simple = 1;
}
enum Numeral {
UNKNOWN = 0;
ARABIC = 1;
ROMAN = 2;
}
message Mappy {
map<int64, int32> nummy = 1;
map<string, string> strry = 2;
map<int32, Simple3> objjy = 3;
map<int64, string> buggy = 4;
map<bool, bool> booly = 5;
map<string, Numeral> enumy = 6;
map<int32, bool> s32booly = 7;
map<int64, bool> s64booly = 8;
map<uint32, bool> u32booly = 9;
map<uint64, bool> u64booly = 10;
}

View File

@ -0,0 +1,852 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: test_objects.proto
package jsonpb
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/ptypes/any"
import google_protobuf1 "github.com/golang/protobuf/ptypes/duration"
import google_protobuf2 "github.com/golang/protobuf/ptypes/struct"
import google_protobuf3 "github.com/golang/protobuf/ptypes/timestamp"
import google_protobuf4 "github.com/golang/protobuf/ptypes/wrappers"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
type Widget_Color int32
const (
Widget_RED Widget_Color = 0
Widget_GREEN Widget_Color = 1
Widget_BLUE Widget_Color = 2
)
var Widget_Color_name = map[int32]string{
0: "RED",
1: "GREEN",
2: "BLUE",
}
var Widget_Color_value = map[string]int32{
"RED": 0,
"GREEN": 1,
"BLUE": 2,
}
func (x Widget_Color) Enum() *Widget_Color {
p := new(Widget_Color)
*p = x
return p
}
func (x Widget_Color) String() string {
return proto.EnumName(Widget_Color_name, int32(x))
}
func (x *Widget_Color) UnmarshalJSON(data []byte) error {
value, err := proto.UnmarshalJSONEnum(Widget_Color_value, data, "Widget_Color")
if err != nil {
return err
}
*x = Widget_Color(value)
return nil
}
func (Widget_Color) EnumDescriptor() ([]byte, []int) { return fileDescriptor1, []int{3, 0} }
// Test message for holding primitive types.
type Simple struct {
OBool *bool `protobuf:"varint,1,opt,name=o_bool,json=oBool" json:"o_bool,omitempty"`
OInt32 *int32 `protobuf:"varint,2,opt,name=o_int32,json=oInt32" json:"o_int32,omitempty"`
OInt64 *int64 `protobuf:"varint,3,opt,name=o_int64,json=oInt64" json:"o_int64,omitempty"`
OUint32 *uint32 `protobuf:"varint,4,opt,name=o_uint32,json=oUint32" json:"o_uint32,omitempty"`
OUint64 *uint64 `protobuf:"varint,5,opt,name=o_uint64,json=oUint64" json:"o_uint64,omitempty"`
OSint32 *int32 `protobuf:"zigzag32,6,opt,name=o_sint32,json=oSint32" json:"o_sint32,omitempty"`
OSint64 *int64 `protobuf:"zigzag64,7,opt,name=o_sint64,json=oSint64" json:"o_sint64,omitempty"`
OFloat *float32 `protobuf:"fixed32,8,opt,name=o_float,json=oFloat" json:"o_float,omitempty"`
ODouble *float64 `protobuf:"fixed64,9,opt,name=o_double,json=oDouble" json:"o_double,omitempty"`
OString *string `protobuf:"bytes,10,opt,name=o_string,json=oString" json:"o_string,omitempty"`
OBytes []byte `protobuf:"bytes,11,opt,name=o_bytes,json=oBytes" json:"o_bytes,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Simple) Reset() { *m = Simple{} }
func (m *Simple) String() string { return proto.CompactTextString(m) }
func (*Simple) ProtoMessage() {}
func (*Simple) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{0} }
func (m *Simple) GetOBool() bool {
if m != nil && m.OBool != nil {
return *m.OBool
}
return false
}
func (m *Simple) GetOInt32() int32 {
if m != nil && m.OInt32 != nil {
return *m.OInt32
}
return 0
}
func (m *Simple) GetOInt64() int64 {
if m != nil && m.OInt64 != nil {
return *m.OInt64
}
return 0
}
func (m *Simple) GetOUint32() uint32 {
if m != nil && m.OUint32 != nil {
return *m.OUint32
}
return 0
}
func (m *Simple) GetOUint64() uint64 {
if m != nil && m.OUint64 != nil {
return *m.OUint64
}
return 0
}
func (m *Simple) GetOSint32() int32 {
if m != nil && m.OSint32 != nil {
return *m.OSint32
}
return 0
}
func (m *Simple) GetOSint64() int64 {
if m != nil && m.OSint64 != nil {
return *m.OSint64
}
return 0
}
func (m *Simple) GetOFloat() float32 {
if m != nil && m.OFloat != nil {
return *m.OFloat
}
return 0
}
func (m *Simple) GetODouble() float64 {
if m != nil && m.ODouble != nil {
return *m.ODouble
}
return 0
}
func (m *Simple) GetOString() string {
if m != nil && m.OString != nil {
return *m.OString
}
return ""
}
func (m *Simple) GetOBytes() []byte {
if m != nil {
return m.OBytes
}
return nil
}
// Test message for holding special non-finites primitives.
type NonFinites struct {
FNan *float32 `protobuf:"fixed32,1,opt,name=f_nan,json=fNan" json:"f_nan,omitempty"`
FPinf *float32 `protobuf:"fixed32,2,opt,name=f_pinf,json=fPinf" json:"f_pinf,omitempty"`
FNinf *float32 `protobuf:"fixed32,3,opt,name=f_ninf,json=fNinf" json:"f_ninf,omitempty"`
DNan *float64 `protobuf:"fixed64,4,opt,name=d_nan,json=dNan" json:"d_nan,omitempty"`
DPinf *float64 `protobuf:"fixed64,5,opt,name=d_pinf,json=dPinf" json:"d_pinf,omitempty"`
DNinf *float64 `protobuf:"fixed64,6,opt,name=d_ninf,json=dNinf" json:"d_ninf,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *NonFinites) Reset() { *m = NonFinites{} }
func (m *NonFinites) String() string { return proto.CompactTextString(m) }
func (*NonFinites) ProtoMessage() {}
func (*NonFinites) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{1} }
func (m *NonFinites) GetFNan() float32 {
if m != nil && m.FNan != nil {
return *m.FNan
}
return 0
}
func (m *NonFinites) GetFPinf() float32 {
if m != nil && m.FPinf != nil {
return *m.FPinf
}
return 0
}
func (m *NonFinites) GetFNinf() float32 {
if m != nil && m.FNinf != nil {
return *m.FNinf
}
return 0
}
func (m *NonFinites) GetDNan() float64 {
if m != nil && m.DNan != nil {
return *m.DNan
}
return 0
}
func (m *NonFinites) GetDPinf() float64 {
if m != nil && m.DPinf != nil {
return *m.DPinf
}
return 0
}
func (m *NonFinites) GetDNinf() float64 {
if m != nil && m.DNinf != nil {
return *m.DNinf
}
return 0
}
// Test message for holding repeated primitives.
type Repeats struct {
RBool []bool `protobuf:"varint,1,rep,name=r_bool,json=rBool" json:"r_bool,omitempty"`
RInt32 []int32 `protobuf:"varint,2,rep,name=r_int32,json=rInt32" json:"r_int32,omitempty"`
RInt64 []int64 `protobuf:"varint,3,rep,name=r_int64,json=rInt64" json:"r_int64,omitempty"`
RUint32 []uint32 `protobuf:"varint,4,rep,name=r_uint32,json=rUint32" json:"r_uint32,omitempty"`
RUint64 []uint64 `protobuf:"varint,5,rep,name=r_uint64,json=rUint64" json:"r_uint64,omitempty"`
RSint32 []int32 `protobuf:"zigzag32,6,rep,name=r_sint32,json=rSint32" json:"r_sint32,omitempty"`
RSint64 []int64 `protobuf:"zigzag64,7,rep,name=r_sint64,json=rSint64" json:"r_sint64,omitempty"`
RFloat []float32 `protobuf:"fixed32,8,rep,name=r_float,json=rFloat" json:"r_float,omitempty"`
RDouble []float64 `protobuf:"fixed64,9,rep,name=r_double,json=rDouble" json:"r_double,omitempty"`
RString []string `protobuf:"bytes,10,rep,name=r_string,json=rString" json:"r_string,omitempty"`
RBytes [][]byte `protobuf:"bytes,11,rep,name=r_bytes,json=rBytes" json:"r_bytes,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Repeats) Reset() { *m = Repeats{} }
func (m *Repeats) String() string { return proto.CompactTextString(m) }
func (*Repeats) ProtoMessage() {}
func (*Repeats) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{2} }
func (m *Repeats) GetRBool() []bool {
if m != nil {
return m.RBool
}
return nil
}
func (m *Repeats) GetRInt32() []int32 {
if m != nil {
return m.RInt32
}
return nil
}
func (m *Repeats) GetRInt64() []int64 {
if m != nil {
return m.RInt64
}
return nil
}
func (m *Repeats) GetRUint32() []uint32 {
if m != nil {
return m.RUint32
}
return nil
}
func (m *Repeats) GetRUint64() []uint64 {
if m != nil {
return m.RUint64
}
return nil
}
func (m *Repeats) GetRSint32() []int32 {
if m != nil {
return m.RSint32
}
return nil
}
func (m *Repeats) GetRSint64() []int64 {
if m != nil {
return m.RSint64
}
return nil
}
func (m *Repeats) GetRFloat() []float32 {
if m != nil {
return m.RFloat
}
return nil
}
func (m *Repeats) GetRDouble() []float64 {
if m != nil {
return m.RDouble
}
return nil
}
func (m *Repeats) GetRString() []string {
if m != nil {
return m.RString
}
return nil
}
func (m *Repeats) GetRBytes() [][]byte {
if m != nil {
return m.RBytes
}
return nil
}
// Test message for holding enums and nested messages.
type Widget struct {
Color *Widget_Color `protobuf:"varint,1,opt,name=color,enum=jsonpb.Widget_Color" json:"color,omitempty"`
RColor []Widget_Color `protobuf:"varint,2,rep,name=r_color,json=rColor,enum=jsonpb.Widget_Color" json:"r_color,omitempty"`
Simple *Simple `protobuf:"bytes,10,opt,name=simple" json:"simple,omitempty"`
RSimple []*Simple `protobuf:"bytes,11,rep,name=r_simple,json=rSimple" json:"r_simple,omitempty"`
Repeats *Repeats `protobuf:"bytes,20,opt,name=repeats" json:"repeats,omitempty"`
RRepeats []*Repeats `protobuf:"bytes,21,rep,name=r_repeats,json=rRepeats" json:"r_repeats,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Widget) Reset() { *m = Widget{} }
func (m *Widget) String() string { return proto.CompactTextString(m) }
func (*Widget) ProtoMessage() {}
func (*Widget) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{3} }
func (m *Widget) GetColor() Widget_Color {
if m != nil && m.Color != nil {
return *m.Color
}
return Widget_RED
}
func (m *Widget) GetRColor() []Widget_Color {
if m != nil {
return m.RColor
}
return nil
}
func (m *Widget) GetSimple() *Simple {
if m != nil {
return m.Simple
}
return nil
}
func (m *Widget) GetRSimple() []*Simple {
if m != nil {
return m.RSimple
}
return nil
}
func (m *Widget) GetRepeats() *Repeats {
if m != nil {
return m.Repeats
}
return nil
}
func (m *Widget) GetRRepeats() []*Repeats {
if m != nil {
return m.RRepeats
}
return nil
}
type Maps struct {
MInt64Str map[int64]string `protobuf:"bytes,1,rep,name=m_int64_str,json=mInt64Str" json:"m_int64_str,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
MBoolSimple map[bool]*Simple `protobuf:"bytes,2,rep,name=m_bool_simple,json=mBoolSimple" json:"m_bool_simple,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Maps) Reset() { *m = Maps{} }
func (m *Maps) String() string { return proto.CompactTextString(m) }
func (*Maps) ProtoMessage() {}
func (*Maps) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{4} }
func (m *Maps) GetMInt64Str() map[int64]string {
if m != nil {
return m.MInt64Str
}
return nil
}
func (m *Maps) GetMBoolSimple() map[bool]*Simple {
if m != nil {
return m.MBoolSimple
}
return nil
}
type MsgWithOneof struct {
// Types that are valid to be assigned to Union:
// *MsgWithOneof_Title
// *MsgWithOneof_Salary
// *MsgWithOneof_Country
// *MsgWithOneof_HomeAddress
Union isMsgWithOneof_Union `protobuf_oneof:"union"`
XXX_unrecognized []byte `json:"-"`
}
func (m *MsgWithOneof) Reset() { *m = MsgWithOneof{} }
func (m *MsgWithOneof) String() string { return proto.CompactTextString(m) }
func (*MsgWithOneof) ProtoMessage() {}
func (*MsgWithOneof) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{5} }
type isMsgWithOneof_Union interface {
isMsgWithOneof_Union()
}
type MsgWithOneof_Title struct {
Title string `protobuf:"bytes,1,opt,name=title,oneof"`
}
type MsgWithOneof_Salary struct {
Salary int64 `protobuf:"varint,2,opt,name=salary,oneof"`
}
type MsgWithOneof_Country struct {
Country string `protobuf:"bytes,3,opt,name=Country,oneof"`
}
type MsgWithOneof_HomeAddress struct {
HomeAddress string `protobuf:"bytes,4,opt,name=home_address,json=homeAddress,oneof"`
}
func (*MsgWithOneof_Title) isMsgWithOneof_Union() {}
func (*MsgWithOneof_Salary) isMsgWithOneof_Union() {}
func (*MsgWithOneof_Country) isMsgWithOneof_Union() {}
func (*MsgWithOneof_HomeAddress) isMsgWithOneof_Union() {}
func (m *MsgWithOneof) GetUnion() isMsgWithOneof_Union {
if m != nil {
return m.Union
}
return nil
}
func (m *MsgWithOneof) GetTitle() string {
if x, ok := m.GetUnion().(*MsgWithOneof_Title); ok {
return x.Title
}
return ""
}
func (m *MsgWithOneof) GetSalary() int64 {
if x, ok := m.GetUnion().(*MsgWithOneof_Salary); ok {
return x.Salary
}
return 0
}
func (m *MsgWithOneof) GetCountry() string {
if x, ok := m.GetUnion().(*MsgWithOneof_Country); ok {
return x.Country
}
return ""
}
func (m *MsgWithOneof) GetHomeAddress() string {
if x, ok := m.GetUnion().(*MsgWithOneof_HomeAddress); ok {
return x.HomeAddress
}
return ""
}
// XXX_OneofFuncs is for the internal use of the proto package.
func (*MsgWithOneof) XXX_OneofFuncs() (func(msg proto.Message, b *proto.Buffer) error, func(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error), func(msg proto.Message) (n int), []interface{}) {
return _MsgWithOneof_OneofMarshaler, _MsgWithOneof_OneofUnmarshaler, _MsgWithOneof_OneofSizer, []interface{}{
(*MsgWithOneof_Title)(nil),
(*MsgWithOneof_Salary)(nil),
(*MsgWithOneof_Country)(nil),
(*MsgWithOneof_HomeAddress)(nil),
}
}
func _MsgWithOneof_OneofMarshaler(msg proto.Message, b *proto.Buffer) error {
m := msg.(*MsgWithOneof)
// union
switch x := m.Union.(type) {
case *MsgWithOneof_Title:
b.EncodeVarint(1<<3 | proto.WireBytes)
b.EncodeStringBytes(x.Title)
case *MsgWithOneof_Salary:
b.EncodeVarint(2<<3 | proto.WireVarint)
b.EncodeVarint(uint64(x.Salary))
case *MsgWithOneof_Country:
b.EncodeVarint(3<<3 | proto.WireBytes)
b.EncodeStringBytes(x.Country)
case *MsgWithOneof_HomeAddress:
b.EncodeVarint(4<<3 | proto.WireBytes)
b.EncodeStringBytes(x.HomeAddress)
case nil:
default:
return fmt.Errorf("MsgWithOneof.Union has unexpected type %T", x)
}
return nil
}
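// Added note (illustrative commentary, not part of the generated source): the
// marshaler above writes each oneof case as a standard protobuf key followed by
// the value. The key is fieldNumber<<3 | wireType, so for Title (field 1,
// length-delimited) it is 1<<3|proto.WireBytes = 0x0A, and for Salary (field 2,
// varint) it is 2<<3|proto.WireVarint = 0x10.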
func _MsgWithOneof_OneofUnmarshaler(msg proto.Message, tag, wire int, b *proto.Buffer) (bool, error) {
m := msg.(*MsgWithOneof)
switch tag {
case 1: // union.title
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeStringBytes()
m.Union = &MsgWithOneof_Title{x}
return true, err
case 2: // union.salary
if wire != proto.WireVarint {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeVarint()
m.Union = &MsgWithOneof_Salary{int64(x)}
return true, err
case 3: // union.Country
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeStringBytes()
m.Union = &MsgWithOneof_Country{x}
return true, err
case 4: // union.home_address
if wire != proto.WireBytes {
return true, proto.ErrInternalBadWireType
}
x, err := b.DecodeStringBytes()
m.Union = &MsgWithOneof_HomeAddress{x}
return true, err
default:
return false, nil
}
}
func _MsgWithOneof_OneofSizer(msg proto.Message) (n int) {
m := msg.(*MsgWithOneof)
// union
switch x := m.Union.(type) {
case *MsgWithOneof_Title:
n += proto.SizeVarint(1<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(len(x.Title)))
n += len(x.Title)
case *MsgWithOneof_Salary:
n += proto.SizeVarint(2<<3 | proto.WireVarint)
n += proto.SizeVarint(uint64(x.Salary))
case *MsgWithOneof_Country:
n += proto.SizeVarint(3<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(len(x.Country)))
n += len(x.Country)
case *MsgWithOneof_HomeAddress:
n += proto.SizeVarint(4<<3 | proto.WireBytes)
n += proto.SizeVarint(uint64(len(x.HomeAddress)))
n += len(x.HomeAddress)
case nil:
default:
panic(fmt.Sprintf("proto: unexpected type %T in oneof", x))
}
return n
}
type Real struct {
Value *float64 `protobuf:"fixed64,1,opt,name=value" json:"value,omitempty"`
proto.XXX_InternalExtensions `json:"-"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Real) Reset() { *m = Real{} }
func (m *Real) String() string { return proto.CompactTextString(m) }
func (*Real) ProtoMessage() {}
func (*Real) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{6} }
var extRange_Real = []proto.ExtensionRange{
{100, 536870911},
}
func (*Real) ExtensionRangeArray() []proto.ExtensionRange {
return extRange_Real
}
func (m *Real) GetValue() float64 {
if m != nil && m.Value != nil {
return *m.Value
}
return 0
}
type Complex struct {
Imaginary *float64 `protobuf:"fixed64,1,opt,name=imaginary" json:"imaginary,omitempty"`
proto.XXX_InternalExtensions `json:"-"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Complex) Reset() { *m = Complex{} }
func (m *Complex) String() string { return proto.CompactTextString(m) }
func (*Complex) ProtoMessage() {}
func (*Complex) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{7} }
var extRange_Complex = []proto.ExtensionRange{
{100, 536870911},
}
func (*Complex) ExtensionRangeArray() []proto.ExtensionRange {
return extRange_Complex
}
func (m *Complex) GetImaginary() float64 {
if m != nil && m.Imaginary != nil {
return *m.Imaginary
}
return 0
}
var E_Complex_RealExtension = &proto.ExtensionDesc{
ExtendedType: (*Real)(nil),
ExtensionType: (*Complex)(nil),
Field: 123,
Name: "jsonpb.Complex.real_extension",
Tag: "bytes,123,opt,name=real_extension,json=realExtension",
Filename: "test_objects.proto",
}
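// Added note (illustrative commentary, not part of the generated source): this
// descriptor corresponds to `extend Real { optional Complex real_extension = 123; }`
// in test_objects.proto and is used through the extensions API, for example:
//
//	r := &Real{Value: proto.Float64(1)} // hypothetical value for illustration
//	_ = proto.SetExtension(r, E_Complex_RealExtension, &Complex{Imaginary: proto.Float64(3)})
//	ext, _ := proto.GetExtension(r, E_Complex_RealExtension) // interface{} holding a *Complex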
type KnownTypes struct {
An *google_protobuf.Any `protobuf:"bytes,14,opt,name=an" json:"an,omitempty"`
Dur *google_protobuf1.Duration `protobuf:"bytes,1,opt,name=dur" json:"dur,omitempty"`
St *google_protobuf2.Struct `protobuf:"bytes,12,opt,name=st" json:"st,omitempty"`
Ts *google_protobuf3.Timestamp `protobuf:"bytes,2,opt,name=ts" json:"ts,omitempty"`
Lv *google_protobuf2.ListValue `protobuf:"bytes,15,opt,name=lv" json:"lv,omitempty"`
Val *google_protobuf2.Value `protobuf:"bytes,16,opt,name=val" json:"val,omitempty"`
Dbl *google_protobuf4.DoubleValue `protobuf:"bytes,3,opt,name=dbl" json:"dbl,omitempty"`
Flt *google_protobuf4.FloatValue `protobuf:"bytes,4,opt,name=flt" json:"flt,omitempty"`
I64 *google_protobuf4.Int64Value `protobuf:"bytes,5,opt,name=i64" json:"i64,omitempty"`
U64 *google_protobuf4.UInt64Value `protobuf:"bytes,6,opt,name=u64" json:"u64,omitempty"`
I32 *google_protobuf4.Int32Value `protobuf:"bytes,7,opt,name=i32" json:"i32,omitempty"`
U32 *google_protobuf4.UInt32Value `protobuf:"bytes,8,opt,name=u32" json:"u32,omitempty"`
Bool *google_protobuf4.BoolValue `protobuf:"bytes,9,opt,name=bool" json:"bool,omitempty"`
Str *google_protobuf4.StringValue `protobuf:"bytes,10,opt,name=str" json:"str,omitempty"`
Bytes *google_protobuf4.BytesValue `protobuf:"bytes,11,opt,name=bytes" json:"bytes,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *KnownTypes) Reset() { *m = KnownTypes{} }
func (m *KnownTypes) String() string { return proto.CompactTextString(m) }
func (*KnownTypes) ProtoMessage() {}
func (*KnownTypes) Descriptor() ([]byte, []int) { return fileDescriptor1, []int{8} }
func (m *KnownTypes) GetAn() *google_protobuf.Any {
if m != nil {
return m.An
}
return nil
}
func (m *KnownTypes) GetDur() *google_protobuf1.Duration {
if m != nil {
return m.Dur
}
return nil
}
func (m *KnownTypes) GetSt() *google_protobuf2.Struct {
if m != nil {
return m.St
}
return nil
}
func (m *KnownTypes) GetTs() *google_protobuf3.Timestamp {
if m != nil {
return m.Ts
}
return nil
}
func (m *KnownTypes) GetLv() *google_protobuf2.ListValue {
if m != nil {
return m.Lv
}
return nil
}
func (m *KnownTypes) GetVal() *google_protobuf2.Value {
if m != nil {
return m.Val
}
return nil
}
func (m *KnownTypes) GetDbl() *google_protobuf4.DoubleValue {
if m != nil {
return m.Dbl
}
return nil
}
func (m *KnownTypes) GetFlt() *google_protobuf4.FloatValue {
if m != nil {
return m.Flt
}
return nil
}
func (m *KnownTypes) GetI64() *google_protobuf4.Int64Value {
if m != nil {
return m.I64
}
return nil
}
func (m *KnownTypes) GetU64() *google_protobuf4.UInt64Value {
if m != nil {
return m.U64
}
return nil
}
func (m *KnownTypes) GetI32() *google_protobuf4.Int32Value {
if m != nil {
return m.I32
}
return nil
}
func (m *KnownTypes) GetU32() *google_protobuf4.UInt32Value {
if m != nil {
return m.U32
}
return nil
}
func (m *KnownTypes) GetBool() *google_protobuf4.BoolValue {
if m != nil {
return m.Bool
}
return nil
}
func (m *KnownTypes) GetStr() *google_protobuf4.StringValue {
if m != nil {
return m.Str
}
return nil
}
func (m *KnownTypes) GetBytes() *google_protobuf4.BytesValue {
if m != nil {
return m.Bytes
}
return nil
}
var E_Name = &proto.ExtensionDesc{
ExtendedType: (*Real)(nil),
ExtensionType: (*string)(nil),
Field: 124,
Name: "jsonpb.name",
Tag: "bytes,124,opt,name=name",
Filename: "test_objects.proto",
}
func init() {
proto.RegisterType((*Simple)(nil), "jsonpb.Simple")
proto.RegisterType((*NonFinites)(nil), "jsonpb.NonFinites")
proto.RegisterType((*Repeats)(nil), "jsonpb.Repeats")
proto.RegisterType((*Widget)(nil), "jsonpb.Widget")
proto.RegisterType((*Maps)(nil), "jsonpb.Maps")
proto.RegisterType((*MsgWithOneof)(nil), "jsonpb.MsgWithOneof")
proto.RegisterType((*Real)(nil), "jsonpb.Real")
proto.RegisterType((*Complex)(nil), "jsonpb.Complex")
proto.RegisterType((*KnownTypes)(nil), "jsonpb.KnownTypes")
proto.RegisterEnum("jsonpb.Widget_Color", Widget_Color_name, Widget_Color_value)
proto.RegisterExtension(E_Complex_RealExtension)
proto.RegisterExtension(E_Name)
}
func init() { proto.RegisterFile("test_objects.proto", fileDescriptor1) }
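// Added note (illustrative commentary, not part of the generated source):
// registering the message types, enum, and extensions above makes them
// resolvable by name at runtime; for example, proto.MessageType("jsonpb.Simple")
// returns the reflect.Type of *Simple, which is what allows google.protobuf.Any
// payloads carrying a matching type URL to be unpacked into the right Go type.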
var fileDescriptor1 = []byte{
// 1160 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x95, 0x41, 0x73, 0xdb, 0x44,
0x14, 0xc7, 0x23, 0xc9, 0x92, 0xed, 0x75, 0x92, 0x9a, 0x6d, 0xda, 0x2a, 0x26, 0x80, 0xc6, 0x94,
0x22, 0x0a, 0x75, 0x07, 0xc7, 0xe3, 0x61, 0x0a, 0x97, 0xa4, 0x71, 0x29, 0x43, 0x13, 0x98, 0x4d,
0x43, 0x8f, 0x1e, 0x39, 0x5a, 0xbb, 0x2a, 0xf2, 0xae, 0x67, 0x77, 0x95, 0xd4, 0x03, 0x87, 0x9c,
0x39, 0x32, 0x7c, 0x05, 0xf8, 0x08, 0x1c, 0xf8, 0x74, 0xcc, 0xdb, 0x95, 0xac, 0xc4, 0x8e, 0x4f,
0xf1, 0x7b, 0xef, 0xff, 0xfe, 0x59, 0xed, 0x6f, 0x77, 0x1f, 0xc2, 0x8a, 0x4a, 0x35, 0xe4, 0xa3,
0x77, 0xf4, 0x5c, 0xc9, 0xce, 0x4c, 0x70, 0xc5, 0xb1, 0xf7, 0x4e, 0x72, 0x36, 0x1b, 0xb5, 0x76,
0x27, 0x9c, 0x4f, 0x52, 0xfa, 0x54, 0x67, 0x47, 0xd9, 0xf8, 0x69, 0xc4, 0xe6, 0x46, 0xd2, 0xfa,
0x78, 0xb9, 0x14, 0x67, 0x22, 0x52, 0x09, 0x67, 0x79, 0x7d, 0x6f, 0xb9, 0x2e, 0x95, 0xc8, 0xce,
0x55, 0x5e, 0xfd, 0x64, 0xb9, 0xaa, 0x92, 0x29, 0x95, 0x2a, 0x9a, 0xce, 0xd6, 0xd9, 0x5f, 0x8a,
0x68, 0x36, 0xa3, 0x22, 0x5f, 0x61, 0xfb, 0x6f, 0x1b, 0x79, 0xa7, 0xc9, 0x74, 0x96, 0x52, 0x7c,
0x0f, 0x79, 0x7c, 0x38, 0xe2, 0x3c, 0xf5, 0xad, 0xc0, 0x0a, 0x6b, 0xc4, 0xe5, 0x87, 0x9c, 0xa7,
0xf8, 0x01, 0xaa, 0xf2, 0x61, 0xc2, 0xd4, 0x7e, 0xd7, 0xb7, 0x03, 0x2b, 0x74, 0x89, 0xc7, 0x7f,
0x80, 0x68, 0x51, 0xe8, 0xf7, 0x7c, 0x27, 0xb0, 0x42, 0xc7, 0x14, 0xfa, 0x3d, 0xbc, 0x8b, 0x6a,
0x7c, 0x98, 0x99, 0x96, 0x4a, 0x60, 0x85, 0x5b, 0xa4, 0xca, 0xcf, 0x74, 0x58, 0x96, 0xfa, 0x3d,
0xdf, 0x0d, 0xac, 0xb0, 0x92, 0x97, 0x8a, 0x2e, 0x69, 0xba, 0xbc, 0xc0, 0x0a, 0x3f, 0x20, 0x55,
0x7e, 0x7a, 0xad, 0x4b, 0x9a, 0xae, 0x6a, 0x60, 0x85, 0x38, 0x2f, 0xf5, 0x7b, 0x66, 0x11, 0xe3,
0x94, 0x47, 0xca, 0xaf, 0x05, 0x56, 0x68, 0x13, 0x8f, 0xbf, 0x80, 0xc8, 0xf4, 0xc4, 0x3c, 0x1b,
0xa5, 0xd4, 0xaf, 0x07, 0x56, 0x68, 0x91, 0x2a, 0x3f, 0xd2, 0x61, 0x6e, 0xa7, 0x44, 0xc2, 0x26,
0x3e, 0x0a, 0xac, 0xb0, 0x0e, 0x76, 0x3a, 0x34, 0x76, 0xa3, 0xb9, 0xa2, 0xd2, 0x6f, 0x04, 0x56,
0xb8, 0x49, 0x3c, 0x7e, 0x08, 0x51, 0xfb, 0x4f, 0x0b, 0xa1, 0x13, 0xce, 0x5e, 0x24, 0x2c, 0x51,
0x54, 0xe2, 0xbb, 0xc8, 0x1d, 0x0f, 0x59, 0xc4, 0xf4, 0x56, 0xd9, 0xa4, 0x32, 0x3e, 0x89, 0x18,
0x6c, 0xe0, 0x78, 0x38, 0x4b, 0xd8, 0x58, 0x6f, 0x94, 0x4d, 0xdc, 0xf1, 0xcf, 0x09, 0x1b, 0x9b,
0x34, 0x83, 0xb4, 0x93, 0xa7, 0x4f, 0x20, 0x7d, 0x17, 0xb9, 0xb1, 0xb6, 0xa8, 0xe8, 0xd5, 0x55,
0xe2, 0xdc, 0x22, 0x36, 0x16, 0xae, 0xce, 0xba, 0x71, 0x61, 0x11, 0x1b, 0x0b, 0x2f, 0x4f, 0x83,
0x45, 0xfb, 0x1f, 0x1b, 0x55, 0x09, 0x9d, 0xd1, 0x48, 0x49, 0x90, 0x88, 0x82, 0x9e, 0x03, 0xf4,
0x44, 0x41, 0x4f, 0x2c, 0xe8, 0x39, 0x40, 0x4f, 0x2c, 0xe8, 0x89, 0x05, 0x3d, 0x07, 0xe8, 0x89,
0x05, 0x3d, 0x51, 0xd2, 0x73, 0x80, 0x9e, 0x28, 0xe9, 0x89, 0x92, 0x9e, 0x03, 0xf4, 0x44, 0x49,
0x4f, 0x94, 0xf4, 0x1c, 0xa0, 0x27, 0x4e, 0xaf, 0x75, 0x2d, 0xe8, 0x39, 0x40, 0x4f, 0x94, 0xf4,
0xc4, 0x82, 0x9e, 0x03, 0xf4, 0xc4, 0x82, 0x9e, 0x28, 0xe9, 0x39, 0x40, 0x4f, 0x94, 0xf4, 0x44,
0x49, 0xcf, 0x01, 0x7a, 0xa2, 0xa4, 0x27, 0x16, 0xf4, 0x1c, 0xa0, 0x27, 0x0c, 0xbd, 0x7f, 0x6d,
0xe4, 0xbd, 0x49, 0xe2, 0x09, 0x55, 0xf8, 0x31, 0x72, 0xcf, 0x79, 0xca, 0x85, 0x26, 0xb7, 0xdd,
0xdd, 0xe9, 0x98, 0x2b, 0xda, 0x31, 0xe5, 0xce, 0x73, 0xa8, 0x11, 0x23, 0xc1, 0x4f, 0xc0, 0xcf,
0xa8, 0x61, 0xf3, 0xd6, 0xa9, 0x3d, 0xa1, 0xff, 0xe2, 0x47, 0xc8, 0x93, 0xfa, 0x2a, 0xe9, 0x53,
0xd5, 0xe8, 0x6e, 0x17, 0x6a, 0x73, 0xc1, 0x48, 0x5e, 0xc5, 0x5f, 0x98, 0x0d, 0xd1, 0x4a, 0x58,
0xe7, 0xaa, 0x12, 0x36, 0x28, 0x97, 0x56, 0x85, 0x01, 0xec, 0xef, 0x68, 0xcf, 0x3b, 0x85, 0x32,
0xe7, 0x4e, 0x8a, 0x3a, 0xfe, 0x0a, 0xd5, 0xc5, 0xb0, 0x10, 0xdf, 0xd3, 0xb6, 0x2b, 0xe2, 0x9a,
0xc8, 0x7f, 0xb5, 0x3f, 0x43, 0xae, 0x59, 0x74, 0x15, 0x39, 0x64, 0x70, 0xd4, 0xdc, 0xc0, 0x75,
0xe4, 0x7e, 0x4f, 0x06, 0x83, 0x93, 0xa6, 0x85, 0x6b, 0xa8, 0x72, 0xf8, 0xea, 0x6c, 0xd0, 0xb4,
0xdb, 0x7f, 0xd9, 0xa8, 0x72, 0x1c, 0xcd, 0x24, 0xfe, 0x16, 0x35, 0xa6, 0xe6, 0xb8, 0xc0, 0xde,
0xeb, 0x33, 0xd6, 0xe8, 0x7e, 0x58, 0xf8, 0x83, 0xa4, 0x73, 0xac, 0xcf, 0xcf, 0xa9, 0x12, 0x03,
0xa6, 0xc4, 0x9c, 0xd4, 0xa7, 0x45, 0x8c, 0x0f, 0xd0, 0xd6, 0x54, 0x9f, 0xcd, 0xe2, 0xab, 0x6d,
0xdd, 0xfe, 0xd1, 0xcd, 0x76, 0x38, 0xaf, 0xe6, 0xb3, 0x8d, 0x41, 0x63, 0x5a, 0x66, 0x5a, 0xdf,
0xa1, 0xed, 0x9b, 0xfe, 0xb8, 0x89, 0x9c, 0x5f, 0xe9, 0x5c, 0x63, 0x74, 0x08, 0xfc, 0xc4, 0x3b,
0xc8, 0xbd, 0x88, 0xd2, 0x8c, 0xea, 0xeb, 0x57, 0x27, 0x26, 0x78, 0x66, 0x7f, 0x63, 0xb5, 0x4e,
0x50, 0x73, 0xd9, 0xfe, 0x7a, 0x7f, 0xcd, 0xf4, 0x3f, 0xbc, 0xde, 0xbf, 0x0a, 0xa5, 0xf4, 0x6b,
0xff, 0x61, 0xa1, 0xcd, 0x63, 0x39, 0x79, 0x93, 0xa8, 0xb7, 0x3f, 0x31, 0xca, 0xc7, 0xf8, 0x3e,
0x72, 0x55, 0xa2, 0x52, 0xaa, 0xed, 0xea, 0x2f, 0x37, 0x88, 0x09, 0xb1, 0x8f, 0x3c, 0x19, 0xa5,
0x91, 0x98, 0x6b, 0x4f, 0xe7, 0xe5, 0x06, 0xc9, 0x63, 0xdc, 0x42, 0xd5, 0xe7, 0x3c, 0x83, 0x95,
0xe8, 0x67, 0x01, 0x7a, 0x8a, 0x04, 0xfe, 0x14, 0x6d, 0xbe, 0xe5, 0x53, 0x3a, 0x8c, 0xe2, 0x58,
0x50, 0x29, 0xf5, 0x0b, 0x01, 0x82, 0x06, 0x64, 0x0f, 0x4c, 0xf2, 0xb0, 0x8a, 0xdc, 0x8c, 0x25,
0x9c, 0xb5, 0x1f, 0xa1, 0x0a, 0xa1, 0x51, 0x5a, 0x7e, 0xbe, 0x65, 0xde, 0x08, 0x1d, 0x3c, 0xae,
0xd5, 0xe2, 0xe6, 0xd5, 0xd5, 0xd5, 0x95, 0xdd, 0xbe, 0x84, 0xff, 0x08, 0x5f, 0xf2, 0x1e, 0xef,
0xa1, 0x7a, 0x32, 0x8d, 0x26, 0x09, 0x83, 0x95, 0x19, 0x79, 0x99, 0x28, 0x5b, 0xba, 0x47, 0x68,
0x5b, 0xd0, 0x28, 0x1d, 0xd2, 0xf7, 0x8a, 0x32, 0x99, 0x70, 0x86, 0x37, 0xcb, 0x23, 0x15, 0xa5,
0xfe, 0x6f, 0x37, 0xcf, 0x64, 0x6e, 0x4f, 0xb6, 0xa0, 0x69, 0x50, 0xf4, 0xb4, 0xff, 0x73, 0x11,
0xfa, 0x91, 0xf1, 0x4b, 0xf6, 0x7a, 0x3e, 0xa3, 0x12, 0x3f, 0x44, 0x76, 0xc4, 0xfc, 0x6d, 0xdd,
0xba, 0xd3, 0x31, 0xf3, 0xa9, 0x53, 0xcc, 0xa7, 0xce, 0x01, 0x9b, 0x13, 0x3b, 0x62, 0xf8, 0x4b,
0xe4, 0xc4, 0x99, 0xb9, 0xa5, 0x8d, 0xee, 0xee, 0x8a, 0xec, 0x28, 0x9f, 0x92, 0x04, 0x54, 0xf8,
0x73, 0x64, 0x4b, 0xe5, 0x6f, 0x6a, 0xed, 0x83, 0x15, 0xed, 0xa9, 0x9e, 0x98, 0xc4, 0x96, 0x70,
0xfb, 0x6d, 0x25, 0x73, 0xbe, 0xad, 0x15, 0xe1, 0xeb, 0x62, 0x78, 0x12, 0x5b, 0x49, 0xd0, 0xa6,
0x17, 0xfe, 0x9d, 0x35, 0xda, 0x57, 0x89, 0x54, 0xbf, 0xc0, 0x0e, 0x13, 0x3b, 0xbd, 0xc0, 0x21,
0x72, 0x2e, 0xa2, 0xd4, 0x6f, 0x6a, 0xf1, 0xfd, 0x15, 0xb1, 0x11, 0x82, 0x04, 0x77, 0x90, 0x13,
0x8f, 0x52, 0xcd, 0xbc, 0xd1, 0xdd, 0x5b, 0xfd, 0x2e, 0xfd, 0xc8, 0xe5, 0xfa, 0x78, 0x94, 0xe2,
0x27, 0xc8, 0x19, 0xa7, 0x4a, 0x1f, 0x01, 0xb8, 0x70, 0xcb, 0x7a, 0xfd, 0x5c, 0xe6, 0xf2, 0x71,
0xaa, 0x40, 0x9e, 0xe4, 0xb3, 0xf5, 0x36, 0xb9, 0xbe, 0x42, 0xb9, 0x3c, 0xe9, 0xf7, 0x60, 0x35,
0x59, 0xbf, 0xa7, 0xa7, 0xca, 0x6d, 0xab, 0x39, 0xbb, 0xae, 0xcf, 0xfa, 0x3d, 0x6d, 0xbf, 0xdf,
0xd5, 0x43, 0x78, 0x8d, 0xfd, 0x7e, 0xb7, 0xb0, 0xdf, 0xef, 0x6a, 0xfb, 0xfd, 0xae, 0x9e, 0xcc,
0xeb, 0xec, 0x17, 0xfa, 0x4c, 0xeb, 0x2b, 0x7a, 0x84, 0xd5, 0xd7, 0x6c, 0x3a, 0xdc, 0x61, 0x23,
0xd7, 0x3a, 0xf0, 0x87, 0xd7, 0x08, 0xad, 0xf1, 0x37, 0x63, 0x21, 0xf7, 0x97, 0x4a, 0xe0, 0xaf,
0x91, 0x5b, 0x0e, 0xf7, 0xdb, 0x3e, 0x40, 0x8f, 0x0b, 0xd3, 0x60, 0x94, 0xcf, 0x02, 0x54, 0x61,
0xd1, 0x94, 0x2e, 0x1d, 0xfc, 0xdf, 0xf5, 0x0b, 0xa3, 0x2b, 0xff, 0x07, 0x00, 0x00, 0xff, 0xff,
0xd5, 0x39, 0x32, 0x09, 0xf9, 0x09, 0x00, 0x00,
}

View File

@ -0,0 +1,147 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2015 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
syntax = "proto2";
import "google/protobuf/any.proto";
import "google/protobuf/duration.proto";
import "google/protobuf/struct.proto";
import "google/protobuf/timestamp.proto";
import "google/protobuf/wrappers.proto";
package jsonpb;
// Test message for holding primitive types.
message Simple {
optional bool o_bool = 1;
optional int32 o_int32 = 2;
optional int64 o_int64 = 3;
optional uint32 o_uint32 = 4;
optional uint64 o_uint64 = 5;
optional sint32 o_sint32 = 6;
optional sint64 o_sint64 = 7;
optional float o_float = 8;
optional double o_double = 9;
optional string o_string = 10;
optional bytes o_bytes = 11;
}
// Test message for holding special non-finites primitives.
message NonFinites {
optional float f_nan = 1;
optional float f_pinf = 2;
optional float f_ninf = 3;
optional double d_nan = 4;
optional double d_pinf = 5;
optional double d_ninf = 6;
}
// Test message for holding repeated primitives.
message Repeats {
repeated bool r_bool = 1;
repeated int32 r_int32 = 2;
repeated int64 r_int64 = 3;
repeated uint32 r_uint32 = 4;
repeated uint64 r_uint64 = 5;
repeated sint32 r_sint32 = 6;
repeated sint64 r_sint64 = 7;
repeated float r_float = 8;
repeated double r_double = 9;
repeated string r_string = 10;
repeated bytes r_bytes = 11;
}
// Test message for holding enums and nested messages.
message Widget {
enum Color {
RED = 0;
GREEN = 1;
BLUE = 2;
};
optional Color color = 1;
repeated Color r_color = 2;
optional Simple simple = 10;
repeated Simple r_simple = 11;
optional Repeats repeats = 20;
repeated Repeats r_repeats = 21;
}
message Maps {
map<int64, string> m_int64_str = 1;
map<bool, Simple> m_bool_simple = 2;
}
message MsgWithOneof {
oneof union {
string title = 1;
int64 salary = 2;
string Country = 3;
string home_address = 4;
}
}
message Real {
optional double value = 1;
extensions 100 to max;
}
extend Real {
optional string name = 124;
}
message Complex {
extend Real {
optional Complex real_extension = 123;
}
optional double imaginary = 1;
extensions 100 to max;
}
message KnownTypes {
optional google.protobuf.Any an = 14;
optional google.protobuf.Duration dur = 1;
optional google.protobuf.Struct st = 12;
optional google.protobuf.Timestamp ts = 2;
optional google.protobuf.ListValue lv = 15;
optional google.protobuf.Value val = 16;
optional google.protobuf.DoubleValue dbl = 3;
optional google.protobuf.FloatValue flt = 4;
optional google.protobuf.Int64Value i64 = 5;
optional google.protobuf.UInt64Value u64 = 6;
optional google.protobuf.Int32Value i32 = 7;
optional google.protobuf.UInt32Value u32 = 8;
optional google.protobuf.BoolValue bool = 9;
optional google.protobuf.StringValue str = 10;
optional google.protobuf.BytesValue bytes = 11;
}

43
vendor/github.com/golang/protobuf/proto/Makefile generated vendored Normal file
View File

@ -0,0 +1,43 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
install:
go install
test: install generate-test-pbs
go test
generate-test-pbs:
make install
make -C testdata
protoc --go_out=Mtestdata/test.proto=github.com/golang/protobuf/proto/testdata,Mgoogle/protobuf/any.proto=github.com/golang/protobuf/ptypes/any:. proto3_proto/proto3.proto
make

2278
vendor/github.com/golang/protobuf/proto/all_test.go generated vendored Normal file

File diff suppressed because it is too large

300
vendor/github.com/golang/protobuf/proto/any_test.go generated vendored Normal file
View File

@ -0,0 +1,300 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2016 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto_test
import (
"strings"
"testing"
"github.com/golang/protobuf/proto"
pb "github.com/golang/protobuf/proto/proto3_proto"
testpb "github.com/golang/protobuf/proto/testdata"
anypb "github.com/golang/protobuf/ptypes/any"
)
var (
expandedMarshaler = proto.TextMarshaler{ExpandAny: true}
expandedCompactMarshaler = proto.TextMarshaler{Compact: true, ExpandAny: true}
)
// anyEqual reports whether two messages which may be google.protobuf.Any or may
// contain google.protobuf.Any fields are equal. We can't use proto.Equal for
// comparison, because semantically equivalent messages may be marshaled to
// binary in different tag order. Instead, trust that TextMarshaler with
// ExpandAny option works and compare the text marshaling results.
func anyEqual(got, want proto.Message) bool {
// if messages are proto.Equal, no need to marshal.
if proto.Equal(got, want) {
return true
}
g := expandedMarshaler.Text(got)
w := expandedMarshaler.Text(want)
return g == w
}
type golden struct {
m proto.Message
t, c string
}
var goldenMessages = makeGolden()
func makeGolden() []golden {
nested := &pb.Nested{Bunny: "Monty"}
nb, err := proto.Marshal(nested)
if err != nil {
panic(err)
}
m1 := &pb.Message{
Name: "David",
ResultCount: 47,
Anything: &anypb.Any{TypeUrl: "type.googleapis.com/" + proto.MessageName(nested), Value: nb},
}
m2 := &pb.Message{
Name: "David",
ResultCount: 47,
Anything: &anypb.Any{TypeUrl: "http://[::1]/type.googleapis.com/" + proto.MessageName(nested), Value: nb},
}
m3 := &pb.Message{
Name: "David",
ResultCount: 47,
Anything: &anypb.Any{TypeUrl: `type.googleapis.com/"/` + proto.MessageName(nested), Value: nb},
}
m4 := &pb.Message{
Name: "David",
ResultCount: 47,
Anything: &anypb.Any{TypeUrl: "type.googleapis.com/a/path/" + proto.MessageName(nested), Value: nb},
}
m5 := &anypb.Any{TypeUrl: "type.googleapis.com/" + proto.MessageName(nested), Value: nb}
any1 := &testpb.MyMessage{Count: proto.Int32(47), Name: proto.String("David")}
proto.SetExtension(any1, testpb.E_Ext_More, &testpb.Ext{Data: proto.String("foo")})
proto.SetExtension(any1, testpb.E_Ext_Text, proto.String("bar"))
any1b, err := proto.Marshal(any1)
if err != nil {
panic(err)
}
any2 := &testpb.MyMessage{Count: proto.Int32(42), Bikeshed: testpb.MyMessage_GREEN.Enum(), RepBytes: [][]byte{[]byte("roboto")}}
proto.SetExtension(any2, testpb.E_Ext_More, &testpb.Ext{Data: proto.String("baz")})
any2b, err := proto.Marshal(any2)
if err != nil {
panic(err)
}
m6 := &pb.Message{
Name: "David",
ResultCount: 47,
Anything: &anypb.Any{TypeUrl: "type.googleapis.com/" + proto.MessageName(any1), Value: any1b},
ManyThings: []*anypb.Any{
&anypb.Any{TypeUrl: "type.googleapis.com/" + proto.MessageName(any2), Value: any2b},
&anypb.Any{TypeUrl: "type.googleapis.com/" + proto.MessageName(any1), Value: any1b},
},
}
const (
m1Golden = `
name: "David"
result_count: 47
anything: <
[type.googleapis.com/proto3_proto.Nested]: <
bunny: "Monty"
>
>
`
m2Golden = `
name: "David"
result_count: 47
anything: <
["http://[::1]/type.googleapis.com/proto3_proto.Nested"]: <
bunny: "Monty"
>
>
`
m3Golden = `
name: "David"
result_count: 47
anything: <
["type.googleapis.com/\"/proto3_proto.Nested"]: <
bunny: "Monty"
>
>
`
m4Golden = `
name: "David"
result_count: 47
anything: <
[type.googleapis.com/a/path/proto3_proto.Nested]: <
bunny: "Monty"
>
>
`
m5Golden = `
[type.googleapis.com/proto3_proto.Nested]: <
bunny: "Monty"
>
`
m6Golden = `
name: "David"
result_count: 47
anything: <
[type.googleapis.com/testdata.MyMessage]: <
count: 47
name: "David"
[testdata.Ext.more]: <
data: "foo"
>
[testdata.Ext.text]: "bar"
>
>
many_things: <
[type.googleapis.com/testdata.MyMessage]: <
count: 42
bikeshed: GREEN
rep_bytes: "roboto"
[testdata.Ext.more]: <
data: "baz"
>
>
>
many_things: <
[type.googleapis.com/testdata.MyMessage]: <
count: 47
name: "David"
[testdata.Ext.more]: <
data: "foo"
>
[testdata.Ext.text]: "bar"
>
>
`
)
return []golden{
{m1, strings.TrimSpace(m1Golden) + "\n", strings.TrimSpace(compact(m1Golden)) + " "},
{m2, strings.TrimSpace(m2Golden) + "\n", strings.TrimSpace(compact(m2Golden)) + " "},
{m3, strings.TrimSpace(m3Golden) + "\n", strings.TrimSpace(compact(m3Golden)) + " "},
{m4, strings.TrimSpace(m4Golden) + "\n", strings.TrimSpace(compact(m4Golden)) + " "},
{m5, strings.TrimSpace(m5Golden) + "\n", strings.TrimSpace(compact(m5Golden)) + " "},
{m6, strings.TrimSpace(m6Golden) + "\n", strings.TrimSpace(compact(m6Golden)) + " "},
}
}
func TestMarshalGolden(t *testing.T) {
for _, tt := range goldenMessages {
if got, want := expandedMarshaler.Text(tt.m), tt.t; got != want {
t.Errorf("message %v: got:\n%s\nwant:\n%s", tt.m, got, want)
}
if got, want := expandedCompactMarshaler.Text(tt.m), tt.c; got != want {
t.Errorf("message %v: got:\n`%s`\nwant:\n`%s`", tt.m, got, want)
}
}
}
func TestUnmarshalGolden(t *testing.T) {
for _, tt := range goldenMessages {
want := tt.m
got := proto.Clone(tt.m)
got.Reset()
if err := proto.UnmarshalText(tt.t, got); err != nil {
t.Errorf("failed to unmarshal\n%s\nerror: %v", tt.t, err)
}
if !anyEqual(got, want) {
t.Errorf("message:\n%s\ngot:\n%s\nwant:\n%s", tt.t, got, want)
}
got.Reset()
if err := proto.UnmarshalText(tt.c, got); err != nil {
t.Errorf("failed to unmarshal\n%s\nerror: %v", tt.c, err)
}
if !anyEqual(got, want) {
t.Errorf("message:\n%s\ngot:\n%s\nwant:\n%s", tt.c, got, want)
}
}
}
func TestMarshalUnknownAny(t *testing.T) {
m := &pb.Message{
Anything: &anypb.Any{
TypeUrl: "foo",
Value: []byte("bar"),
},
}
want := `anything: <
type_url: "foo"
value: "bar"
>
`
got := expandedMarshaler.Text(m)
if got != want {
t.Errorf("got\n`%s`\nwant\n`%s`", got, want)
}
}
func TestAmbiguousAny(t *testing.T) {
pb := &anypb.Any{}
err := proto.UnmarshalText(`
type_url: "ttt/proto3_proto.Nested"
value: "\n\x05Monty"
`, pb)
t.Logf("result: %v (error: %v)", expandedMarshaler.Text(pb), err)
if err != nil {
t.Errorf("failed to parse ambiguous Any message: %v", err)
}
}
func TestUnmarshalOverwriteAny(t *testing.T) {
pb := &anypb.Any{}
err := proto.UnmarshalText(`
[type.googleapis.com/a/path/proto3_proto.Nested]: <
bunny: "Monty"
>
[type.googleapis.com/a/path/proto3_proto.Nested]: <
bunny: "Rabbit of Caerbannog"
>
`, pb)
want := `line 7: Any message unpacked multiple times, or "type_url" already set`
if err.Error() != want {
t.Errorf("incorrect error.\nHave: %v\nWant: %v", err.Error(), want)
}
}
func TestUnmarshalAnyMixAndMatch(t *testing.T) {
pb := &anypb.Any{}
err := proto.UnmarshalText(`
value: "\n\x05Monty"
[type.googleapis.com/a/path/proto3_proto.Nested]: <
bunny: "Rabbit of Caerbannog"
>
`, pb)
want := `line 5: Any message unpacked multiple times, or "value" already set`
if err.Error() != want {
t.Errorf("incorrect error.\nHave: %v\nWant: %v", err.Error(), want)
}
}

229
vendor/github.com/golang/protobuf/proto/clone.go generated vendored Normal file
View File

@ -0,0 +1,229 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Protocol buffer deep copy and merge.
// TODO: RawMessage.
package proto
import (
"log"
"reflect"
"strings"
)
// Clone returns a deep copy of a protocol buffer.
func Clone(pb Message) Message {
in := reflect.ValueOf(pb)
if in.IsNil() {
return pb
}
out := reflect.New(in.Type().Elem())
// out is empty so a merge is a deep copy.
mergeStruct(out.Elem(), in.Elem())
return out.Interface().(Message)
}
// Merge merges src into dst.
// Required and optional fields that are set in src will be set to that value in dst.
// Elements of repeated fields will be appended.
// Merge panics if src and dst are not the same type, or if dst is nil.
func Merge(dst, src Message) {
in := reflect.ValueOf(src)
out := reflect.ValueOf(dst)
if out.IsNil() {
panic("proto: nil destination")
}
if in.Type() != out.Type() {
// Explicit test prior to mergeStruct so that mistyped nils will fail
panic("proto: type mismatch")
}
if in.IsNil() {
// Merging nil into non-nil is a quiet no-op
return
}
mergeStruct(out.Elem(), in.Elem())
}
func mergeStruct(out, in reflect.Value) {
sprop := GetProperties(in.Type())
for i := 0; i < in.NumField(); i++ {
f := in.Type().Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
mergeAny(out.Field(i), in.Field(i), false, sprop.Prop[i])
}
if emIn, ok := extendable(in.Addr().Interface()); ok {
emOut, _ := extendable(out.Addr().Interface())
mIn, muIn := emIn.extensionsRead()
if mIn != nil {
mOut := emOut.extensionsWrite()
muIn.Lock()
mergeExtension(mOut, mIn)
muIn.Unlock()
}
}
uf := in.FieldByName("XXX_unrecognized")
if !uf.IsValid() {
return
}
uin := uf.Bytes()
if len(uin) > 0 {
out.FieldByName("XXX_unrecognized").SetBytes(append([]byte(nil), uin...))
}
}
// mergeAny performs a merge between two values of the same type.
// viaPtr indicates whether the values were indirected through a pointer (implying proto2).
// prop is set if this is a struct field (it may be nil).
func mergeAny(out, in reflect.Value, viaPtr bool, prop *Properties) {
if in.Type() == protoMessageType {
if !in.IsNil() {
if out.IsNil() {
out.Set(reflect.ValueOf(Clone(in.Interface().(Message))))
} else {
Merge(out.Interface().(Message), in.Interface().(Message))
}
}
return
}
switch in.Kind() {
case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
reflect.String, reflect.Uint32, reflect.Uint64:
if !viaPtr && isProto3Zero(in) {
return
}
out.Set(in)
case reflect.Interface:
// Probably a oneof field; copy non-nil values.
if in.IsNil() {
return
}
// Allocate destination if it is not set, or set to a different type.
// Otherwise we will merge as normal.
if out.IsNil() || out.Elem().Type() != in.Elem().Type() {
out.Set(reflect.New(in.Elem().Elem().Type())) // interface -> *T -> T -> new(T)
}
mergeAny(out.Elem(), in.Elem(), false, nil)
case reflect.Map:
if in.Len() == 0 {
return
}
if out.IsNil() {
out.Set(reflect.MakeMap(in.Type()))
}
// For maps with value types of *T or []byte we need to deep copy each value.
elemKind := in.Type().Elem().Kind()
for _, key := range in.MapKeys() {
var val reflect.Value
switch elemKind {
case reflect.Ptr:
val = reflect.New(in.Type().Elem().Elem())
mergeAny(val, in.MapIndex(key), false, nil)
case reflect.Slice:
val = in.MapIndex(key)
val = reflect.ValueOf(append([]byte{}, val.Bytes()...))
default:
val = in.MapIndex(key)
}
out.SetMapIndex(key, val)
}
case reflect.Ptr:
if in.IsNil() {
return
}
if out.IsNil() {
out.Set(reflect.New(in.Elem().Type()))
}
mergeAny(out.Elem(), in.Elem(), true, nil)
case reflect.Slice:
if in.IsNil() {
return
}
if in.Type().Elem().Kind() == reflect.Uint8 {
// []byte is a scalar bytes field, not a repeated field.
// Edge case: if this is in a proto3 message, a zero length
// bytes field is considered the zero value, and should not
// be merged.
if prop != nil && prop.proto3 && in.Len() == 0 {
return
}
// Make a deep copy.
// Append to []byte{} instead of []byte(nil) so that we never end up
// with a nil result.
out.SetBytes(append([]byte{}, in.Bytes()...))
return
}
n := in.Len()
if out.IsNil() {
out.Set(reflect.MakeSlice(in.Type(), 0, n))
}
switch in.Type().Elem().Kind() {
case reflect.Bool, reflect.Float32, reflect.Float64, reflect.Int32, reflect.Int64,
reflect.String, reflect.Uint32, reflect.Uint64:
out.Set(reflect.AppendSlice(out, in))
default:
for i := 0; i < n; i++ {
x := reflect.Indirect(reflect.New(in.Type().Elem()))
mergeAny(x, in.Index(i), false, nil)
out.Set(reflect.Append(out, x))
}
}
case reflect.Struct:
mergeStruct(out, in)
default:
// unknown type, so not a protocol buffer
log.Printf("proto: don't know how to copy %v", in)
}
}
func mergeExtension(out, in map[int32]Extension) {
for extNum, eIn := range in {
eOut := Extension{desc: eIn.desc}
if eIn.value != nil {
v := reflect.New(reflect.TypeOf(eIn.value)).Elem()
mergeAny(v, reflect.ValueOf(eIn.value), false, nil)
eOut.value = v.Interface()
}
if eIn.enc != nil {
eOut.enc = make([]byte, len(eIn.enc))
copy(eOut.enc, eIn.enc)
}
out[extNum] = eOut
}
}

300
vendor/github.com/golang/protobuf/proto/clone_test.go generated vendored Normal file
View File

@ -0,0 +1,300 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto_test
import (
"testing"
"github.com/golang/protobuf/proto"
proto3pb "github.com/golang/protobuf/proto/proto3_proto"
pb "github.com/golang/protobuf/proto/testdata"
)
var cloneTestMessage = &pb.MyMessage{
Count: proto.Int32(42),
Name: proto.String("Dave"),
Pet: []string{"bunny", "kitty", "horsey"},
Inner: &pb.InnerMessage{
Host: proto.String("niles"),
Port: proto.Int32(9099),
Connected: proto.Bool(true),
},
Others: []*pb.OtherMessage{
{
Value: []byte("some bytes"),
},
},
Somegroup: &pb.MyMessage_SomeGroup{
GroupField: proto.Int32(6),
},
RepBytes: [][]byte{[]byte("sham"), []byte("wow")},
}
func init() {
ext := &pb.Ext{
Data: proto.String("extension"),
}
if err := proto.SetExtension(cloneTestMessage, pb.E_Ext_More, ext); err != nil {
panic("SetExtension: " + err.Error())
}
}
func TestClone(t *testing.T) {
m := proto.Clone(cloneTestMessage).(*pb.MyMessage)
if !proto.Equal(m, cloneTestMessage) {
t.Errorf("Clone(%v) = %v", cloneTestMessage, m)
}
// Verify it was a deep copy.
*m.Inner.Port++
if proto.Equal(m, cloneTestMessage) {
t.Error("Mutating clone changed the original")
}
// Byte fields and repeated fields should be copied.
if &m.Pet[0] == &cloneTestMessage.Pet[0] {
t.Error("Pet: repeated field not copied")
}
if &m.Others[0] == &cloneTestMessage.Others[0] {
t.Error("Others: repeated field not copied")
}
if &m.Others[0].Value[0] == &cloneTestMessage.Others[0].Value[0] {
t.Error("Others[0].Value: bytes field not copied")
}
if &m.RepBytes[0] == &cloneTestMessage.RepBytes[0] {
t.Error("RepBytes: repeated field not copied")
}
if &m.RepBytes[0][0] == &cloneTestMessage.RepBytes[0][0] {
t.Error("RepBytes[0]: bytes field not copied")
}
}
func TestCloneNil(t *testing.T) {
var m *pb.MyMessage
if c := proto.Clone(m); !proto.Equal(m, c) {
t.Errorf("Clone(%v) = %v", m, c)
}
}
var mergeTests = []struct {
src, dst, want proto.Message
}{
{
src: &pb.MyMessage{
Count: proto.Int32(42),
},
dst: &pb.MyMessage{
Name: proto.String("Dave"),
},
want: &pb.MyMessage{
Count: proto.Int32(42),
Name: proto.String("Dave"),
},
},
{
src: &pb.MyMessage{
Inner: &pb.InnerMessage{
Host: proto.String("hey"),
Connected: proto.Bool(true),
},
Pet: []string{"horsey"},
Others: []*pb.OtherMessage{
{
Value: []byte("some bytes"),
},
},
},
dst: &pb.MyMessage{
Inner: &pb.InnerMessage{
Host: proto.String("niles"),
Port: proto.Int32(9099),
},
Pet: []string{"bunny", "kitty"},
Others: []*pb.OtherMessage{
{
Key: proto.Int64(31415926535),
},
{
// Explicitly test a src=nil field
Inner: nil,
},
},
},
want: &pb.MyMessage{
Inner: &pb.InnerMessage{
Host: proto.String("hey"),
Connected: proto.Bool(true),
Port: proto.Int32(9099),
},
Pet: []string{"bunny", "kitty", "horsey"},
Others: []*pb.OtherMessage{
{
Key: proto.Int64(31415926535),
},
{},
{
Value: []byte("some bytes"),
},
},
},
},
{
src: &pb.MyMessage{
RepBytes: [][]byte{[]byte("wow")},
},
dst: &pb.MyMessage{
Somegroup: &pb.MyMessage_SomeGroup{
GroupField: proto.Int32(6),
},
RepBytes: [][]byte{[]byte("sham")},
},
want: &pb.MyMessage{
Somegroup: &pb.MyMessage_SomeGroup{
GroupField: proto.Int32(6),
},
RepBytes: [][]byte{[]byte("sham"), []byte("wow")},
},
},
// Check that a scalar bytes field replaces rather than appends.
{
src: &pb.OtherMessage{Value: []byte("foo")},
dst: &pb.OtherMessage{Value: []byte("bar")},
want: &pb.OtherMessage{Value: []byte("foo")},
},
{
src: &pb.MessageWithMap{
NameMapping: map[int32]string{6: "Nigel"},
MsgMapping: map[int64]*pb.FloatingPoint{
0x4001: &pb.FloatingPoint{F: proto.Float64(2.0)},
0x4002: &pb.FloatingPoint{
F: proto.Float64(2.0),
},
},
ByteMapping: map[bool][]byte{true: []byte("wowsa")},
},
dst: &pb.MessageWithMap{
NameMapping: map[int32]string{
6: "Bruce", // should be overwritten
7: "Andrew",
},
MsgMapping: map[int64]*pb.FloatingPoint{
0x4002: &pb.FloatingPoint{
F: proto.Float64(3.0),
Exact: proto.Bool(true),
}, // the entire message should be overwritten
},
},
want: &pb.MessageWithMap{
NameMapping: map[int32]string{
6: "Nigel",
7: "Andrew",
},
MsgMapping: map[int64]*pb.FloatingPoint{
0x4001: &pb.FloatingPoint{F: proto.Float64(2.0)},
0x4002: &pb.FloatingPoint{
F: proto.Float64(2.0),
},
},
ByteMapping: map[bool][]byte{true: []byte("wowsa")},
},
},
// proto3 shouldn't merge zero values,
// in the same way that proto2 shouldn't merge nils.
{
src: &proto3pb.Message{
Name: "Aaron",
Data: []byte(""), // zero value, but not nil
},
dst: &proto3pb.Message{
HeightInCm: 176,
Data: []byte("texas!"),
},
want: &proto3pb.Message{
Name: "Aaron",
HeightInCm: 176,
Data: []byte("texas!"),
},
},
// Oneof fields should merge by assignment.
{
src: &pb.Communique{
Union: &pb.Communique_Number{41},
},
dst: &pb.Communique{
Union: &pb.Communique_Name{"Bobby Tables"},
},
want: &pb.Communique{
Union: &pb.Communique_Number{41},
},
},
// Oneof nil is the same as not set.
{
src: &pb.Communique{},
dst: &pb.Communique{
Union: &pb.Communique_Name{"Bobby Tables"},
},
want: &pb.Communique{
Union: &pb.Communique_Name{"Bobby Tables"},
},
},
{
src: &proto3pb.Message{
Terrain: map[string]*proto3pb.Nested{
"kay_a": &proto3pb.Nested{Cute: true}, // replace
"kay_b": &proto3pb.Nested{Bunny: "rabbit"}, // insert
},
},
dst: &proto3pb.Message{
Terrain: map[string]*proto3pb.Nested{
"kay_a": &proto3pb.Nested{Bunny: "lost"}, // replaced
"kay_c": &proto3pb.Nested{Bunny: "bunny"}, // keep
},
},
want: &proto3pb.Message{
Terrain: map[string]*proto3pb.Nested{
"kay_a": &proto3pb.Nested{Cute: true},
"kay_b": &proto3pb.Nested{Bunny: "rabbit"},
"kay_c": &proto3pb.Nested{Bunny: "bunny"},
},
},
},
}
func TestMerge(t *testing.T) {
for _, m := range mergeTests {
got := proto.Clone(m.dst)
proto.Merge(got, m.src)
if !proto.Equal(got, m.want) {
t.Errorf("Merge(%v, %v)\n got %v\nwant %v\n", m.dst, m.src, got, m.want)
}
}
}

970
vendor/github.com/golang/protobuf/proto/decode.go generated vendored Normal file
View File

@ -0,0 +1,970 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
/*
* Routines for decoding protocol buffer data to construct in-memory representations.
*/
import (
"errors"
"fmt"
"io"
"os"
"reflect"
)
// errOverflow is returned when an integer is too large to be represented.
var errOverflow = errors.New("proto: integer overflow")
// ErrInternalBadWireType is returned by generated code when an incorrect
// wire type is encountered. It does not get returned to user code.
var ErrInternalBadWireType = errors.New("proto: internal error: bad wiretype for oneof")
// The fundamental decoders that interpret bytes on the wire.
// Those that take integer types all return uint64 and are
// therefore of type valueDecoder.
// DecodeVarint reads a varint-encoded integer from the slice.
// It returns the integer and the number of bytes consumed, or
// zero if there is not enough.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
func DecodeVarint(buf []byte) (x uint64, n int) {
for shift := uint(0); shift < 64; shift += 7 {
if n >= len(buf) {
return 0, 0
}
b := uint64(buf[n])
n++
x |= (b & 0x7F) << shift
if (b & 0x80) == 0 {
return x, n
}
}
// The number is too large to represent in a 64-bit value.
return 0, 0
}
func (p *Buffer) decodeVarintSlow() (x uint64, err error) {
i := p.index
l := len(p.buf)
for shift := uint(0); shift < 64; shift += 7 {
if i >= l {
err = io.ErrUnexpectedEOF
return
}
b := p.buf[i]
i++
x |= (uint64(b) & 0x7F) << shift
if b < 0x80 {
p.index = i
return
}
}
// The number is too large to represent in a 64-bit value.
err = errOverflow
return
}
// DecodeVarint reads a varint-encoded integer from the Buffer.
// This is the format for the
// int32, int64, uint32, uint64, bool, and enum
// protocol buffer types.
func (p *Buffer) DecodeVarint() (x uint64, err error) {
i := p.index
buf := p.buf
if i >= len(buf) {
return 0, io.ErrUnexpectedEOF
} else if buf[i] < 0x80 {
p.index++
return uint64(buf[i]), nil
} else if len(buf)-i < 10 {
return p.decodeVarintSlow()
}
var b uint64
// we already checked the first byte
x = uint64(buf[i]) - 0x80
i++
b = uint64(buf[i])
i++
x += b << 7
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 7
b = uint64(buf[i])
i++
x += b << 14
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 14
b = uint64(buf[i])
i++
x += b << 21
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 21
b = uint64(buf[i])
i++
x += b << 28
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 28
b = uint64(buf[i])
i++
x += b << 35
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 35
b = uint64(buf[i])
i++
x += b << 42
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 42
b = uint64(buf[i])
i++
x += b << 49
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 49
b = uint64(buf[i])
i++
x += b << 56
if b&0x80 == 0 {
goto done
}
x -= 0x80 << 56
b = uint64(buf[i])
i++
x += b << 63
if b&0x80 == 0 {
goto done
}
// x -= 0x80 << 63 // Always zero.
return 0, errOverflow
done:
p.index = i
return x, nil
}
// DecodeFixed64 reads a 64-bit integer from the Buffer.
// This is the format for the
// fixed64, sfixed64, and double protocol buffer types.
func (p *Buffer) DecodeFixed64() (x uint64, err error) {
// x, err already 0
i := p.index + 8
if i < 0 || i > len(p.buf) {
err = io.ErrUnexpectedEOF
return
}
p.index = i
x = uint64(p.buf[i-8])
x |= uint64(p.buf[i-7]) << 8
x |= uint64(p.buf[i-6]) << 16
x |= uint64(p.buf[i-5]) << 24
x |= uint64(p.buf[i-4]) << 32
x |= uint64(p.buf[i-3]) << 40
x |= uint64(p.buf[i-2]) << 48
x |= uint64(p.buf[i-1]) << 56
return
}
// DecodeFixed32 reads a 32-bit integer from the Buffer.
// This is the format for the
// fixed32, sfixed32, and float protocol buffer types.
func (p *Buffer) DecodeFixed32() (x uint64, err error) {
// x, err already 0
i := p.index + 4
if i < 0 || i > len(p.buf) {
err = io.ErrUnexpectedEOF
return
}
p.index = i
x = uint64(p.buf[i-4])
x |= uint64(p.buf[i-3]) << 8
x |= uint64(p.buf[i-2]) << 16
x |= uint64(p.buf[i-1]) << 24
return
}
// DecodeZigzag64 reads a zigzag-encoded 64-bit integer
// from the Buffer.
// This is the format used for the sint64 protocol buffer type.
func (p *Buffer) DecodeZigzag64() (x uint64, err error) {
x, err = p.DecodeVarint()
if err != nil {
return
}
x = (x >> 1) ^ uint64((int64(x&1)<<63)>>63)
return
}
// DecodeZigzag32 reads a zigzag-encoded 32-bit integer
// from the Buffer.
// This is the format used for the sint32 protocol buffer type.
func (p *Buffer) DecodeZigzag32() (x uint64, err error) {
x, err = p.DecodeVarint()
if err != nil {
return
}
x = uint64((uint32(x) >> 1) ^ uint32((int32(x&1)<<31)>>31))
return
}
// These are not ValueDecoders: they produce an array of bytes or a string.
// bytes, embedded messages
// DecodeRawBytes reads a count-delimited byte buffer from the Buffer.
// This is the format used for the bytes protocol buffer
// type and for embedded messages.
func (p *Buffer) DecodeRawBytes(alloc bool) (buf []byte, err error) {
n, err := p.DecodeVarint()
if err != nil {
return nil, err
}
nb := int(n)
if nb < 0 {
return nil, fmt.Errorf("proto: bad byte length %d", nb)
}
end := p.index + nb
if end < p.index || end > len(p.buf) {
return nil, io.ErrUnexpectedEOF
}
if !alloc {
// todo: check if can get more uses of alloc=false
buf = p.buf[p.index:end]
p.index += nb
return
}
buf = make([]byte, nb)
copy(buf, p.buf[p.index:])
p.index += nb
return
}
// DecodeStringBytes reads an encoded string from the Buffer.
// This is the format used for the proto2 string type.
func (p *Buffer) DecodeStringBytes() (s string, err error) {
buf, err := p.DecodeRawBytes(false)
if err != nil {
return
}
return string(buf), nil
}
// Skip the next item in the buffer. Its wire type is decoded and presented as an argument.
// If the protocol buffer has extensions, and the field matches, add it as an extension.
// Otherwise, if the XXX_unrecognized field exists, append the skipped data there.
func (o *Buffer) skipAndSave(t reflect.Type, tag, wire int, base structPointer, unrecField field) error {
oi := o.index
err := o.skip(t, tag, wire)
if err != nil {
return err
}
if !unrecField.IsValid() {
return nil
}
ptr := structPointer_Bytes(base, unrecField)
// Add the skipped field to struct field
obuf := o.buf
o.buf = *ptr
o.EncodeVarint(uint64(tag<<3 | wire))
*ptr = append(o.buf, obuf[oi:o.index]...)
o.buf = obuf
return nil
}
// Skip the next item in the buffer. Its wire type is decoded and presented as an argument.
func (o *Buffer) skip(t reflect.Type, tag, wire int) error {
var u uint64
var err error
switch wire {
case WireVarint:
_, err = o.DecodeVarint()
case WireFixed64:
_, err = o.DecodeFixed64()
case WireBytes:
_, err = o.DecodeRawBytes(false)
case WireFixed32:
_, err = o.DecodeFixed32()
case WireStartGroup:
for {
u, err = o.DecodeVarint()
if err != nil {
break
}
fwire := int(u & 0x7)
if fwire == WireEndGroup {
break
}
ftag := int(u >> 3)
err = o.skip(t, ftag, fwire)
if err != nil {
break
}
}
default:
err = fmt.Errorf("proto: can't skip unknown wire type %d for %s", wire, t)
}
return err
}
// Unmarshaler is the interface representing objects that can
// unmarshal themselves. The method should reset the receiver before
// decoding starts. The argument points to data that may be
// overwritten, so implementations should not keep references to the
// buffer.
type Unmarshaler interface {
Unmarshal([]byte) error
}
// Unmarshal parses the protocol buffer representation in buf and places the
// decoded result in pb. If the struct underlying pb does not match
// the data in buf, the results can be unpredictable.
//
// Unmarshal resets pb before starting to unmarshal, so any
// existing data in pb is always removed. Use UnmarshalMerge
// to preserve and append to existing data.
func Unmarshal(buf []byte, pb Message) error {
pb.Reset()
return UnmarshalMerge(buf, pb)
}
// UnmarshalMerge parses the protocol buffer representation in buf and
// writes the decoded result to pb. If the struct underlying pb does not match
// the data in buf, the results can be unpredictable.
//
// UnmarshalMerge merges into existing data in pb.
// Most code should use Unmarshal instead.
func UnmarshalMerge(buf []byte, pb Message) error {
// If the object can unmarshal itself, let it.
if u, ok := pb.(Unmarshaler); ok {
return u.Unmarshal(buf)
}
return NewBuffer(buf).Unmarshal(pb)
}
// DecodeMessage reads a count-delimited message from the Buffer.
func (p *Buffer) DecodeMessage(pb Message) error {
enc, err := p.DecodeRawBytes(false)
if err != nil {
return err
}
return NewBuffer(enc).Unmarshal(pb)
}
// DecodeGroup reads a tag-delimited group from the Buffer.
func (p *Buffer) DecodeGroup(pb Message) error {
typ, base, err := getbase(pb)
if err != nil {
return err
}
return p.unmarshalType(typ.Elem(), GetProperties(typ.Elem()), true, base)
}
// Unmarshal parses the protocol buffer representation in the
// Buffer and places the decoded result in pb. If the struct
// underlying pb does not match the data in the buffer, the results can be
// unpredictable.
//
// Unlike proto.Unmarshal, this does not reset pb before starting to unmarshal.
func (p *Buffer) Unmarshal(pb Message) error {
// If the object can unmarshal itself, let it.
if u, ok := pb.(Unmarshaler); ok {
err := u.Unmarshal(p.buf[p.index:])
p.index = len(p.buf)
return err
}
typ, base, err := getbase(pb)
if err != nil {
return err
}
err = p.unmarshalType(typ.Elem(), GetProperties(typ.Elem()), false, base)
if collectStats {
stats.Decode++
}
return err
}
// unmarshalType does the work of unmarshaling a structure.
func (o *Buffer) unmarshalType(st reflect.Type, prop *StructProperties, is_group bool, base structPointer) error {
var state errorState
required, reqFields := prop.reqCount, uint64(0)
var err error
for err == nil && o.index < len(o.buf) {
oi := o.index
var u uint64
u, err = o.DecodeVarint()
if err != nil {
break
}
wire := int(u & 0x7)
if wire == WireEndGroup {
if is_group {
if required > 0 {
// Not enough information to determine the exact field.
// (See below.)
return &RequiredNotSetError{"{Unknown}"}
}
return nil // input is satisfied
}
return fmt.Errorf("proto: %s: wiretype end group for non-group", st)
}
tag := int(u >> 3)
if tag <= 0 {
return fmt.Errorf("proto: %s: illegal tag %d (wire type %d)", st, tag, wire)
}
fieldnum, ok := prop.decoderTags.get(tag)
if !ok {
// Maybe it's an extension?
if prop.extendable {
if e, _ := extendable(structPointer_Interface(base, st)); isExtensionField(e, int32(tag)) {
if err = o.skip(st, tag, wire); err == nil {
extmap := e.extensionsWrite()
ext := extmap[int32(tag)] // may be missing
ext.enc = append(ext.enc, o.buf[oi:o.index]...)
extmap[int32(tag)] = ext
}
continue
}
}
// Maybe it's a oneof?
if prop.oneofUnmarshaler != nil {
m := structPointer_Interface(base, st).(Message)
// First return value indicates whether tag is a oneof field.
ok, err = prop.oneofUnmarshaler(m, tag, wire, o)
if err == ErrInternalBadWireType {
// Map the error to something more descriptive.
// Do the formatting here to save generated code space.
err = fmt.Errorf("bad wiretype for oneof field in %T", m)
}
if ok {
continue
}
}
err = o.skipAndSave(st, tag, wire, base, prop.unrecField)
continue
}
p := prop.Prop[fieldnum]
if p.dec == nil {
fmt.Fprintf(os.Stderr, "proto: no protobuf decoder for %s.%s\n", st, st.Field(fieldnum).Name)
continue
}
dec := p.dec
if wire != WireStartGroup && wire != p.WireType {
if wire == WireBytes && p.packedDec != nil {
// a packable field
dec = p.packedDec
} else {
err = fmt.Errorf("proto: bad wiretype for field %s.%s: got wiretype %d, want %d", st, st.Field(fieldnum).Name, wire, p.WireType)
continue
}
}
decErr := dec(o, p, base)
if decErr != nil && !state.shouldContinue(decErr, p) {
err = decErr
}
if err == nil && p.Required {
// Successfully decoded a required field.
if tag <= 64 {
// use bitmap for fields 1-64 to catch field reuse.
var mask uint64 = 1 << uint64(tag-1)
if reqFields&mask == 0 {
// new required field
reqFields |= mask
required--
}
} else {
// This is imprecise. It can be fooled by a required field
// with a tag > 64 that is encoded twice; that's very rare.
// A fully correct implementation would require allocating
// a data structure, which we would like to avoid.
required--
}
}
}
if err == nil {
if is_group {
return io.ErrUnexpectedEOF
}
if state.err != nil {
return state.err
}
if required > 0 {
// Not enough information to determine the exact field. If we use extra
// CPU, we could determine the field only if the missing required field
// has a tag <= 64 and we check reqFields.
return &RequiredNotSetError{"{Unknown}"}
}
}
return err
}
// Individual type decoders
// For each,
// u is the decoded value,
// v is a pointer to the field (pointer) in the struct
// Sizes of the pools to allocate inside the Buffer.
// The goal is modest amortization and allocation
// on at least 16-byte boundaries.
const (
boolPoolSize = 16
uint32PoolSize = 8
uint64PoolSize = 4
)
// Decode a bool.
func (o *Buffer) dec_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
if len(o.bools) == 0 {
o.bools = make([]bool, boolPoolSize)
}
o.bools[0] = u != 0
*structPointer_Bool(base, p.field) = &o.bools[0]
o.bools = o.bools[1:]
return nil
}
func (o *Buffer) dec_proto3_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
*structPointer_BoolVal(base, p.field) = u != 0
return nil
}
// Decode an int32.
func (o *Buffer) dec_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word32_Set(structPointer_Word32(base, p.field), o, uint32(u))
return nil
}
func (o *Buffer) dec_proto3_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word32Val_Set(structPointer_Word32Val(base, p.field), uint32(u))
return nil
}
// Decode an int64.
func (o *Buffer) dec_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word64_Set(structPointer_Word64(base, p.field), o, u)
return nil
}
func (o *Buffer) dec_proto3_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
word64Val_Set(structPointer_Word64Val(base, p.field), o, u)
return nil
}
// Decode a string.
func (o *Buffer) dec_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
*structPointer_String(base, p.field) = &s
return nil
}
func (o *Buffer) dec_proto3_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
*structPointer_StringVal(base, p.field) = s
return nil
}
// Decode a slice of bytes ([]byte).
func (o *Buffer) dec_slice_byte(p *Properties, base structPointer) error {
b, err := o.DecodeRawBytes(true)
if err != nil {
return err
}
*structPointer_Bytes(base, p.field) = b
return nil
}
// Decode a slice of bools ([]bool).
func (o *Buffer) dec_slice_bool(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
v := structPointer_BoolSlice(base, p.field)
*v = append(*v, u != 0)
return nil
}
// Decode a slice of bools ([]bool) in packed format.
func (o *Buffer) dec_slice_packed_bool(p *Properties, base structPointer) error {
v := structPointer_BoolSlice(base, p.field)
nn, err := o.DecodeVarint()
if err != nil {
return err
}
nb := int(nn) // number of bytes of encoded bools
fin := o.index + nb
if fin < o.index {
return errOverflow
}
y := *v
for o.index < fin {
u, err := p.valDec(o)
if err != nil {
return err
}
y = append(y, u != 0)
}
*v = y
return nil
}
// Decode a slice of int32s ([]int32).
func (o *Buffer) dec_slice_int32(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
structPointer_Word32Slice(base, p.field).Append(uint32(u))
return nil
}
// Decode a slice of int32s ([]int32) in packed format.
func (o *Buffer) dec_slice_packed_int32(p *Properties, base structPointer) error {
v := structPointer_Word32Slice(base, p.field)
nn, err := o.DecodeVarint()
if err != nil {
return err
}
nb := int(nn) // number of bytes of encoded int32s
fin := o.index + nb
if fin < o.index {
return errOverflow
}
for o.index < fin {
u, err := p.valDec(o)
if err != nil {
return err
}
v.Append(uint32(u))
}
return nil
}
// Decode a slice of int64s ([]int64).
func (o *Buffer) dec_slice_int64(p *Properties, base structPointer) error {
u, err := p.valDec(o)
if err != nil {
return err
}
structPointer_Word64Slice(base, p.field).Append(u)
return nil
}
// Decode a slice of int64s ([]int64) in packed format.
func (o *Buffer) dec_slice_packed_int64(p *Properties, base structPointer) error {
v := structPointer_Word64Slice(base, p.field)
nn, err := o.DecodeVarint()
if err != nil {
return err
}
nb := int(nn) // number of bytes of encoded int64s
fin := o.index + nb
if fin < o.index {
return errOverflow
}
for o.index < fin {
u, err := p.valDec(o)
if err != nil {
return err
}
v.Append(u)
}
return nil
}
// Decode a slice of strings ([]string).
func (o *Buffer) dec_slice_string(p *Properties, base structPointer) error {
s, err := o.DecodeStringBytes()
if err != nil {
return err
}
v := structPointer_StringSlice(base, p.field)
*v = append(*v, s)
return nil
}
// Decode a slice of slice of bytes ([][]byte).
func (o *Buffer) dec_slice_slice_byte(p *Properties, base structPointer) error {
b, err := o.DecodeRawBytes(true)
if err != nil {
return err
}
v := structPointer_BytesSlice(base, p.field)
*v = append(*v, b)
return nil
}
// Decode a map field.
func (o *Buffer) dec_new_map(p *Properties, base structPointer) error {
raw, err := o.DecodeRawBytes(false)
if err != nil {
return err
}
oi := o.index // index at the end of this map entry
o.index -= len(raw) // move buffer back to start of map entry
mptr := structPointer_NewAt(base, p.field, p.mtype) // *map[K]V
if mptr.Elem().IsNil() {
mptr.Elem().Set(reflect.MakeMap(mptr.Type().Elem()))
}
v := mptr.Elem() // map[K]V
// Prepare addressable doubly-indirect placeholders for the key and value types.
// See enc_new_map for why.
keyptr := reflect.New(reflect.PtrTo(p.mtype.Key())).Elem() // addressable *K
keybase := toStructPointer(keyptr.Addr()) // **K
var valbase structPointer
var valptr reflect.Value
switch p.mtype.Elem().Kind() {
case reflect.Slice:
// []byte
var dummy []byte
valptr = reflect.ValueOf(&dummy) // *[]byte
valbase = toStructPointer(valptr) // *[]byte
case reflect.Ptr:
// message; valptr is **Msg; need to allocate the intermediate pointer
valptr = reflect.New(reflect.PtrTo(p.mtype.Elem())).Elem() // addressable *V
valptr.Set(reflect.New(valptr.Type().Elem()))
valbase = toStructPointer(valptr)
default:
// everything else
valptr = reflect.New(reflect.PtrTo(p.mtype.Elem())).Elem() // addressable *V
valbase = toStructPointer(valptr.Addr()) // **V
}
// Decode.
// This parses a restricted wire format, namely the encoding of a message
// with two fields. See enc_new_map for the format.
for o.index < oi {
// tagcode for key and value properties are always a single byte
// because they have tags 1 and 2.
tagcode := o.buf[o.index]
o.index++
switch tagcode {
case p.mkeyprop.tagcode[0]:
if err := p.mkeyprop.dec(o, p.mkeyprop, keybase); err != nil {
return err
}
case p.mvalprop.tagcode[0]:
if err := p.mvalprop.dec(o, p.mvalprop, valbase); err != nil {
return err
}
default:
// TODO: Should we silently skip this instead?
return fmt.Errorf("proto: bad map data tag %d", raw[0])
}
}
keyelem, valelem := keyptr.Elem(), valptr.Elem()
if !keyelem.IsValid() {
keyelem = reflect.Zero(p.mtype.Key())
}
if !valelem.IsValid() {
valelem = reflect.Zero(p.mtype.Elem())
}
v.SetMapIndex(keyelem, valelem)
return nil
}
// Decode a group.
func (o *Buffer) dec_struct_group(p *Properties, base structPointer) error {
bas := structPointer_GetStructPointer(base, p.field)
if structPointer_IsNil(bas) {
// allocate new nested message
bas = toStructPointer(reflect.New(p.stype))
structPointer_SetStructPointer(base, p.field, bas)
}
return o.unmarshalType(p.stype, p.sprop, true, bas)
}
// Decode an embedded message.
func (o *Buffer) dec_struct_message(p *Properties, base structPointer) (err error) {
raw, e := o.DecodeRawBytes(false)
if e != nil {
return e
}
bas := structPointer_GetStructPointer(base, p.field)
if structPointer_IsNil(bas) {
// allocate new nested message
bas = toStructPointer(reflect.New(p.stype))
structPointer_SetStructPointer(base, p.field, bas)
}
// If the object can unmarshal itself, let it.
if p.isUnmarshaler {
iv := structPointer_Interface(bas, p.stype)
return iv.(Unmarshaler).Unmarshal(raw)
}
obuf := o.buf
oi := o.index
o.buf = raw
o.index = 0
err = o.unmarshalType(p.stype, p.sprop, false, bas)
o.buf = obuf
o.index = oi
return err
}
// Decode a slice of embedded messages.
func (o *Buffer) dec_slice_struct_message(p *Properties, base structPointer) error {
return o.dec_slice_struct(p, false, base)
}
// Decode a slice of embedded groups.
func (o *Buffer) dec_slice_struct_group(p *Properties, base structPointer) error {
return o.dec_slice_struct(p, true, base)
}
// Decode a slice of structs ([]*struct).
func (o *Buffer) dec_slice_struct(p *Properties, is_group bool, base structPointer) error {
v := reflect.New(p.stype)
bas := toStructPointer(v)
structPointer_StructPointerSlice(base, p.field).Append(bas)
if is_group {
err := o.unmarshalType(p.stype, p.sprop, is_group, bas)
return err
}
raw, err := o.DecodeRawBytes(false)
if err != nil {
return err
}
// If the object can unmarshal itself, let it.
if p.isUnmarshaler {
iv := v.Interface()
return iv.(Unmarshaler).Unmarshal(raw)
}
obuf := o.buf
oi := o.index
o.buf = raw
o.index = 0
err = o.unmarshalType(p.stype, p.sprop, is_group, bas)
o.buf = obuf
o.index = oi
return err
}
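A short, illustrative sketch of the public decode entry points defined above — package-level DecodeVarint on a raw slice, and Buffer-based unmarshaling. The message type is borrowed from the vendored proto3_proto test package purely for demonstration.

package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/proto"
	proto3pb "github.com/golang/protobuf/proto/proto3_proto"
)

func main() {
	// The classic wire example: bytes 0x96 0x01 decode to 150 in two bytes.
	x, n := proto.DecodeVarint([]byte{0x96, 0x01})
	fmt.Println(x, n) // 150 2

	// Round-trip a message. proto.Unmarshal resets the destination first,
	// while Buffer.Unmarshal and UnmarshalMerge merge into existing data.
	in := &proto3pb.Message{Name: "Aaron", Data: []byte("texas!")}
	raw, err := proto.Marshal(in)
	if err != nil {
		log.Fatal(err)
	}
	out := &proto3pb.Message{}
	if err := proto.NewBuffer(raw).Unmarshal(out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Name, string(out.Data))
}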

258
vendor/github.com/golang/protobuf/proto/decode_test.go generated vendored Normal file
View File

@ -0,0 +1,258 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// +build go1.7
package proto_test
import (
"fmt"
"testing"
"github.com/golang/protobuf/proto"
tpb "github.com/golang/protobuf/proto/proto3_proto"
)
var (
bytesBlackhole []byte
msgBlackhole = new(tpb.Message)
)
// BenchmarkVarint32ArraySmall shows the performance on an array of small int32 fields (1 and
// 2 bytes long).
func BenchmarkVarint32ArraySmall(b *testing.B) {
for i := uint(1); i <= 10; i++ {
dist := genInt32Dist([7]int{0, 3, 1}, 1<<i)
raw, err := proto.Marshal(&tpb.Message{
ShortKey: dist,
})
if err != nil {
b.Error("wrong encode", err)
}
b.Run(fmt.Sprintf("Len%v", len(dist)), func(b *testing.B) {
scratchBuf := proto.NewBuffer(nil)
b.ResetTimer()
for k := 0; k < b.N; k++ {
scratchBuf.SetBuf(raw)
msgBlackhole.Reset()
if err := scratchBuf.Unmarshal(msgBlackhole); err != nil {
b.Error("wrong decode", err)
}
}
})
}
}
// BenchmarkVarint32ArrayLarge shows the performance on an array of large int32 fields (3 and
// 4 bytes long, with a small number of 1, 2, 5 and 10 byte long versions).
func BenchmarkVarint32ArrayLarge(b *testing.B) {
for i := uint(1); i <= 10; i++ {
dist := genInt32Dist([7]int{0, 1, 2, 4, 8, 1, 1}, 1<<i)
raw, err := proto.Marshal(&tpb.Message{
ShortKey: dist,
})
if err != nil {
b.Error("wrong encode", err)
}
b.Run(fmt.Sprintf("Len%v", len(dist)), func(b *testing.B) {
scratchBuf := proto.NewBuffer(nil)
b.ResetTimer()
for k := 0; k < b.N; k++ {
scratchBuf.SetBuf(raw)
msgBlackhole.Reset()
if err := scratchBuf.Unmarshal(msgBlackhole); err != nil {
b.Error("wrong decode", err)
}
}
})
}
}
// BenchmarkVarint64ArraySmall shows the performance on an array of small int64 fields (1 and
// 2 bytes long).
func BenchmarkVarint64ArraySmall(b *testing.B) {
for i := uint(1); i <= 10; i++ {
dist := genUint64Dist([11]int{0, 3, 1}, 1<<i)
raw, err := proto.Marshal(&tpb.Message{
Key: dist,
})
if err != nil {
b.Error("wrong encode", err)
}
b.Run(fmt.Sprintf("Len%v", len(dist)), func(b *testing.B) {
scratchBuf := proto.NewBuffer(nil)
b.ResetTimer()
for k := 0; k < b.N; k++ {
scratchBuf.SetBuf(raw)
msgBlackhole.Reset()
if err := scratchBuf.Unmarshal(msgBlackhole); err != nil {
b.Error("wrong decode", err)
}
}
})
}
}
// BenchmarkVarint64ArrayLarge shows the performance on an array of large int64 fields (6, 7,
// and 8 bytes long with a small number of the other sizes).
func BenchmarkVarint64ArrayLarge(b *testing.B) {
for i := uint(1); i <= 10; i++ {
dist := genUint64Dist([11]int{0, 1, 1, 2, 4, 8, 16, 32, 16, 1, 1}, 1<<i)
raw, err := proto.Marshal(&tpb.Message{
Key: dist,
})
if err != nil {
b.Error("wrong encode", err)
}
b.Run(fmt.Sprintf("Len%v", len(dist)), func(b *testing.B) {
scratchBuf := proto.NewBuffer(nil)
b.ResetTimer()
for k := 0; k < b.N; k++ {
scratchBuf.SetBuf(raw)
msgBlackhole.Reset()
if err := scratchBuf.Unmarshal(msgBlackhole); err != nil {
b.Error("wrong decode", err)
}
}
})
}
}
// BenchmarkVarint64ArrayMixed shows the performance of lots of small messages, each
// containing a small number of large (3, 4, and 5 byte) repeated int64s.
func BenchmarkVarint64ArrayMixed(b *testing.B) {
for i := uint(1); i <= 1<<5; i <<= 1 {
dist := genUint64Dist([11]int{0, 0, 0, 4, 6, 4, 0, 0, 0, 0, 0}, int(i))
// number of sub fields
for k := uint(1); k <= 1<<10; k <<= 2 {
msg := &tpb.Message{}
for m := uint(0); m < k; m++ {
msg.Children = append(msg.Children, &tpb.Message{
Key: dist,
})
}
raw, err := proto.Marshal(msg)
if err != nil {
b.Error("wrong encode", err)
}
b.Run(fmt.Sprintf("Fields%vLen%v", k, i), func(b *testing.B) {
scratchBuf := proto.NewBuffer(nil)
b.ResetTimer()
for k := 0; k < b.N; k++ {
scratchBuf.SetBuf(raw)
msgBlackhole.Reset()
if err := scratchBuf.Unmarshal(msgBlackhole); err != nil {
b.Error("wrong decode", err)
}
}
})
}
}
}
// genInt32Dist generates a slice of ints that will match the size distribution of dist.
// A size of 6 corresponds to a max length varint32, which is 10 bytes. The distribution
// is 1-indexed. (i.e. the value at index 1 is how many 1 byte ints to create).
func genInt32Dist(dist [7]int, count int) (dest []int32) {
for i := 0; i < count; i++ {
for k := 0; k < len(dist); k++ {
var num int32
switch k {
case 1:
num = 1<<7 - 1
case 2:
num = 1<<14 - 1
case 3:
num = 1<<21 - 1
case 4:
num = 1<<28 - 1
case 5:
num = 1<<29 - 1
case 6:
num = -1
}
for m := 0; m < dist[k]; m++ {
dest = append(dest, num)
}
}
}
return
}
// genUint64Dist generates a slice of ints that will match the size distribution of dist.
// The distribution is 1-indexed. (i.e. the value at index 1 is how many 1 byte ints to create).
func genUint64Dist(dist [11]int, count int) (dest []uint64) {
for i := 0; i < count; i++ {
for k := 0; k < len(dist); k++ {
var num uint64
switch k {
case 1:
num = 1<<7 - 1
case 2:
num = 1<<14 - 1
case 3:
num = 1<<21 - 1
case 4:
num = 1<<28 - 1
case 5:
num = 1<<35 - 1
case 6:
num = 1<<42 - 1
case 7:
num = 1<<49 - 1
case 8:
num = 1<<56 - 1
case 9:
num = 1<<63 - 1
case 10:
num = 1<<64 - 1
}
for m := 0; m < dist[k]; m++ {
dest = append(dest, num)
}
}
}
return
}
// BenchmarkDecodeEmpty measures the overhead of doing the minimal possible decode.
func BenchmarkDecodeEmpty(b *testing.B) {
raw, err := proto.Marshal(&tpb.Message{})
if err != nil {
b.Error("wrong encode", err)
}
b.ResetTimer()
for i := 0; i < b.N; i++ {
if err := proto.Unmarshal(raw, msgBlackhole); err != nil {
b.Error("wrong decode", err)
}
}
}
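The benchmarks above all share one scratch-buffer pattern; spelled out as a standalone sketch (again using the proto3_proto test message only for illustration):

package main

import (
	"log"

	"github.com/golang/protobuf/proto"
	tpb "github.com/golang/protobuf/proto/proto3_proto"
)

func main() {
	raw, err := proto.Marshal(&tpb.Message{ShortKey: []int32{1, 127, 128, 16383}})
	if err != nil {
		log.Fatal(err)
	}

	// Reuse one Buffer and one destination message across iterations to
	// avoid per-decode allocations, exactly as the benchmarks do.
	scratch := proto.NewBuffer(nil)
	msg := new(tpb.Message)
	for i := 0; i < 3; i++ {
		scratch.SetBuf(raw)
		msg.Reset()
		if err := scratch.Unmarshal(msg); err != nil {
			log.Fatal(err)
		}
	}
	log.Printf("decoded %d values", len(msg.ShortKey))
}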

1362
vendor/github.com/golang/protobuf/proto/encode.go generated vendored Normal file

File diff suppressed because it is too large

85
vendor/github.com/golang/protobuf/proto/encode_test.go generated vendored Normal file
View File

@ -0,0 +1,85 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// +build go1.7
package proto_test
import (
"strconv"
"testing"
"github.com/golang/protobuf/proto"
tpb "github.com/golang/protobuf/proto/proto3_proto"
"github.com/golang/protobuf/ptypes"
)
var (
blackhole []byte
)
// BenchmarkAny creates increasingly large arbitrary Any messages. The type is always the
// same.
func BenchmarkAny(b *testing.B) {
data := make([]byte, 1<<20)
quantum := 1 << 10
for i := uint(0); i <= 10; i++ {
b.Run(strconv.Itoa(quantum<<i), func(b *testing.B) {
for k := 0; k < b.N; k++ {
inner := &tpb.Message{
Data: data[:quantum<<i],
}
outer, err := ptypes.MarshalAny(inner)
if err != nil {
b.Error("wrong encode", err)
}
raw, err := proto.Marshal(&tpb.Message{
Anything: outer,
})
if err != nil {
b.Error("wrong encode", err)
}
blackhole = raw
}
})
}
}
// BenchmarkEmpy measures the overhead of doing the minimal possible encode.
func BenchmarkEmpy(b *testing.B) {
for i := 0; i < b.N; i++ {
raw, err := proto.Marshal(&tpb.Message{})
if err != nil {
b.Error("wrong encode", err)
}
blackhole = raw
}
}
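The encode-path counterpart: BenchmarkAny boils down to wrapping an inner message with ptypes.MarshalAny and marshaling the outer envelope. A minimal sketch, reusing the proto3_proto Message (whose Anything field holds the Any), for illustration only:

package main

import (
	"log"

	"github.com/golang/protobuf/proto"
	tpb "github.com/golang/protobuf/proto/proto3_proto"
	"github.com/golang/protobuf/ptypes"
)

func main() {
	inner := &tpb.Message{Data: []byte("payload")}
	anything, err := ptypes.MarshalAny(inner)
	if err != nil {
		log.Fatal(err)
	}
	raw, err := proto.Marshal(&tpb.Message{Anything: anything})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("encoded %d bytes", len(raw))
}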

300
vendor/github.com/golang/protobuf/proto/equal.go generated vendored Normal file
View File

@ -0,0 +1,300 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Protocol buffer comparison.
package proto
import (
"bytes"
"log"
"reflect"
"strings"
)
/*
Equal returns true iff protocol buffers a and b are equal.
The arguments must both be pointers to protocol buffer structs.
Equality is defined in this way:
- Two messages are equal iff they are the same type,
corresponding fields are equal, unknown field sets
are equal, and extensions sets are equal.
- Two set scalar fields are equal iff their values are equal.
If the fields are of a floating-point type, remember that
NaN != x for all x, including NaN. If the message is defined
in a proto3 .proto file, fields are not "set"; specifically,
zero length proto3 "bytes" fields are equal (nil == {}).
- Two repeated fields are equal iff their lengths are the same,
and their corresponding elements are equal. Note a "bytes" field,
although represented by []byte, is not a repeated field and the
rule for the scalar fields described above applies.
- Two unset fields are equal.
- Two unknown field sets are equal if their current
encoded state is equal.
- Two extension sets are equal iff they have corresponding
elements that are pairwise equal.
- Two map fields are equal iff their lengths are the same,
and they contain the same set of elements. Zero-length map
fields are equal.
  - Every other combination of things is not equal.
The return value is undefined if a and b are not protocol buffers.
*/
func Equal(a, b Message) bool {
if a == nil || b == nil {
return a == b
}
v1, v2 := reflect.ValueOf(a), reflect.ValueOf(b)
if v1.Type() != v2.Type() {
return false
}
if v1.Kind() == reflect.Ptr {
if v1.IsNil() {
return v2.IsNil()
}
if v2.IsNil() {
return false
}
v1, v2 = v1.Elem(), v2.Elem()
}
if v1.Kind() != reflect.Struct {
return false
}
return equalStruct(v1, v2)
}
// v1 and v2 are known to have the same type.
func equalStruct(v1, v2 reflect.Value) bool {
sprop := GetProperties(v1.Type())
for i := 0; i < v1.NumField(); i++ {
f := v1.Type().Field(i)
if strings.HasPrefix(f.Name, "XXX_") {
continue
}
f1, f2 := v1.Field(i), v2.Field(i)
if f.Type.Kind() == reflect.Ptr {
if n1, n2 := f1.IsNil(), f2.IsNil(); n1 && n2 {
// both unset
continue
} else if n1 != n2 {
// set/unset mismatch
return false
}
b1, ok := f1.Interface().(raw)
if ok {
b2 := f2.Interface().(raw)
// RawMessage
if !bytes.Equal(b1.Bytes(), b2.Bytes()) {
return false
}
continue
}
f1, f2 = f1.Elem(), f2.Elem()
}
if !equalAny(f1, f2, sprop.Prop[i]) {
return false
}
}
if em1 := v1.FieldByName("XXX_InternalExtensions"); em1.IsValid() {
em2 := v2.FieldByName("XXX_InternalExtensions")
if !equalExtensions(v1.Type(), em1.Interface().(XXX_InternalExtensions), em2.Interface().(XXX_InternalExtensions)) {
return false
}
}
if em1 := v1.FieldByName("XXX_extensions"); em1.IsValid() {
em2 := v2.FieldByName("XXX_extensions")
if !equalExtMap(v1.Type(), em1.Interface().(map[int32]Extension), em2.Interface().(map[int32]Extension)) {
return false
}
}
uf := v1.FieldByName("XXX_unrecognized")
if !uf.IsValid() {
return true
}
u1 := uf.Bytes()
u2 := v2.FieldByName("XXX_unrecognized").Bytes()
if !bytes.Equal(u1, u2) {
return false
}
return true
}
// v1 and v2 are known to have the same type.
// prop may be nil.
func equalAny(v1, v2 reflect.Value, prop *Properties) bool {
if v1.Type() == protoMessageType {
m1, _ := v1.Interface().(Message)
m2, _ := v2.Interface().(Message)
return Equal(m1, m2)
}
switch v1.Kind() {
case reflect.Bool:
return v1.Bool() == v2.Bool()
case reflect.Float32, reflect.Float64:
return v1.Float() == v2.Float()
case reflect.Int32, reflect.Int64:
return v1.Int() == v2.Int()
case reflect.Interface:
// Probably a oneof field; compare the inner values.
n1, n2 := v1.IsNil(), v2.IsNil()
if n1 || n2 {
return n1 == n2
}
e1, e2 := v1.Elem(), v2.Elem()
if e1.Type() != e2.Type() {
return false
}
return equalAny(e1, e2, nil)
case reflect.Map:
if v1.Len() != v2.Len() {
return false
}
for _, key := range v1.MapKeys() {
val2 := v2.MapIndex(key)
if !val2.IsValid() {
// This key was not found in the second map.
return false
}
if !equalAny(v1.MapIndex(key), val2, nil) {
return false
}
}
return true
case reflect.Ptr:
// Maps may have nil values in them, so check for nil.
if v1.IsNil() && v2.IsNil() {
return true
}
if v1.IsNil() != v2.IsNil() {
return false
}
return equalAny(v1.Elem(), v2.Elem(), prop)
case reflect.Slice:
if v1.Type().Elem().Kind() == reflect.Uint8 {
// short circuit: []byte
// Edge case: if this is in a proto3 message, a zero length
// bytes field is considered the zero value.
if prop != nil && prop.proto3 && v1.Len() == 0 && v2.Len() == 0 {
return true
}
if v1.IsNil() != v2.IsNil() {
return false
}
return bytes.Equal(v1.Interface().([]byte), v2.Interface().([]byte))
}
if v1.Len() != v2.Len() {
return false
}
for i := 0; i < v1.Len(); i++ {
if !equalAny(v1.Index(i), v2.Index(i), prop) {
return false
}
}
return true
case reflect.String:
return v1.Interface().(string) == v2.Interface().(string)
case reflect.Struct:
return equalStruct(v1, v2)
case reflect.Uint32, reflect.Uint64:
return v1.Uint() == v2.Uint()
}
// unknown type, so not a protocol buffer
log.Printf("proto: don't know how to compare %v", v1)
return false
}
// base is the struct type that the extensions are based on.
// x1 and x2 are InternalExtensions.
func equalExtensions(base reflect.Type, x1, x2 XXX_InternalExtensions) bool {
em1, _ := x1.extensionsRead()
em2, _ := x2.extensionsRead()
return equalExtMap(base, em1, em2)
}
func equalExtMap(base reflect.Type, em1, em2 map[int32]Extension) bool {
if len(em1) != len(em2) {
return false
}
for extNum, e1 := range em1 {
e2, ok := em2[extNum]
if !ok {
return false
}
m1, m2 := e1.value, e2.value
if m1 != nil && m2 != nil {
// Both are unencoded.
if !equalAny(reflect.ValueOf(m1), reflect.ValueOf(m2), nil) {
return false
}
continue
}
// At least one is encoded. To do a semantically correct comparison
// we need to unmarshal them first.
var desc *ExtensionDesc
if m := extensionMaps[base]; m != nil {
desc = m[extNum]
}
if desc == nil {
log.Printf("proto: don't know how to compare extension %d of %v", extNum, base)
continue
}
var err error
if m1 == nil {
m1, err = decodeExtension(e1.enc, desc)
}
if m2 == nil && err == nil {
m2, err = decodeExtension(e2.enc, desc)
}
if err != nil {
// The encoded form is invalid.
log.Printf("proto: badly encoded extension %d of %v: %v", extNum, base, err)
return false
}
if !equalAny(reflect.ValueOf(m1), reflect.ValueOf(m2), nil) {
return false
}
}
return true
}
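A minimal sketch of the equality semantics documented above — in proto3, zero-length bytes compare equal to nil, while any differing scalar makes the messages unequal (types borrowed from the vendored test packages, for illustration only):

package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	proto3pb "github.com/golang/protobuf/proto/proto3_proto"
)

func main() {
	a := &proto3pb.Message{Name: "x", Data: []byte{}}
	b := &proto3pb.Message{Name: "x", Data: nil}
	fmt.Println(proto.Equal(a, b)) // true: zero-length proto3 bytes == nil

	c := &proto3pb.Message{Name: "x", HeightInCm: 1}
	fmt.Println(proto.Equal(a, c)) // false: scalar field differs
}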

224
vendor/github.com/golang/protobuf/proto/equal_test.go generated vendored Normal file
View File

@ -0,0 +1,224 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto_test
import (
"testing"
. "github.com/golang/protobuf/proto"
proto3pb "github.com/golang/protobuf/proto/proto3_proto"
pb "github.com/golang/protobuf/proto/testdata"
)
// Four identical base messages.
// The init function adds extensions to some of them.
var messageWithoutExtension = &pb.MyMessage{Count: Int32(7)}
var messageWithExtension1a = &pb.MyMessage{Count: Int32(7)}
var messageWithExtension1b = &pb.MyMessage{Count: Int32(7)}
var messageWithExtension2 = &pb.MyMessage{Count: Int32(7)}
// Two messages with non-message extensions.
var messageWithInt32Extension1 = &pb.MyMessage{Count: Int32(8)}
var messageWithInt32Extension2 = &pb.MyMessage{Count: Int32(8)}
func init() {
ext1 := &pb.Ext{Data: String("Kirk")}
ext2 := &pb.Ext{Data: String("Picard")}
// messageWithExtension1a has ext1, but never marshals it.
if err := SetExtension(messageWithExtension1a, pb.E_Ext_More, ext1); err != nil {
panic("SetExtension on 1a failed: " + err.Error())
}
// messageWithExtension1b is the unmarshaled form of messageWithExtension1a.
if err := SetExtension(messageWithExtension1b, pb.E_Ext_More, ext1); err != nil {
panic("SetExtension on 1b failed: " + err.Error())
}
buf, err := Marshal(messageWithExtension1b)
if err != nil {
panic("Marshal of 1b failed: " + err.Error())
}
messageWithExtension1b.Reset()
if err := Unmarshal(buf, messageWithExtension1b); err != nil {
panic("Unmarshal of 1b failed: " + err.Error())
}
// messageWithExtension2 has ext2.
if err := SetExtension(messageWithExtension2, pb.E_Ext_More, ext2); err != nil {
panic("SetExtension on 2 failed: " + err.Error())
}
if err := SetExtension(messageWithInt32Extension1, pb.E_Ext_Number, Int32(23)); err != nil {
panic("SetExtension on Int32-1 failed: " + err.Error())
}
if err := SetExtension(messageWithInt32Extension1, pb.E_Ext_Number, Int32(24)); err != nil {
panic("SetExtension on Int32-2 failed: " + err.Error())
}
}
var EqualTests = []struct {
desc string
a, b Message
exp bool
}{
{"different types", &pb.GoEnum{}, &pb.GoTestField{}, false},
{"equal empty", &pb.GoEnum{}, &pb.GoEnum{}, true},
{"nil vs nil", nil, nil, true},
{"typed nil vs typed nil", (*pb.GoEnum)(nil), (*pb.GoEnum)(nil), true},
{"typed nil vs empty", (*pb.GoEnum)(nil), &pb.GoEnum{}, false},
{"different typed nil", (*pb.GoEnum)(nil), (*pb.GoTestField)(nil), false},
{"one set field, one unset field", &pb.GoTestField{Label: String("foo")}, &pb.GoTestField{}, false},
{"one set field zero, one unset field", &pb.GoTest{Param: Int32(0)}, &pb.GoTest{}, false},
{"different set fields", &pb.GoTestField{Label: String("foo")}, &pb.GoTestField{Label: String("bar")}, false},
{"equal set", &pb.GoTestField{Label: String("foo")}, &pb.GoTestField{Label: String("foo")}, true},
{"repeated, one set", &pb.GoTest{F_Int32Repeated: []int32{2, 3}}, &pb.GoTest{}, false},
{"repeated, different length", &pb.GoTest{F_Int32Repeated: []int32{2, 3}}, &pb.GoTest{F_Int32Repeated: []int32{2}}, false},
{"repeated, different value", &pb.GoTest{F_Int32Repeated: []int32{2}}, &pb.GoTest{F_Int32Repeated: []int32{3}}, false},
{"repeated, equal", &pb.GoTest{F_Int32Repeated: []int32{2, 4}}, &pb.GoTest{F_Int32Repeated: []int32{2, 4}}, true},
{"repeated, nil equal nil", &pb.GoTest{F_Int32Repeated: nil}, &pb.GoTest{F_Int32Repeated: nil}, true},
{"repeated, nil equal empty", &pb.GoTest{F_Int32Repeated: nil}, &pb.GoTest{F_Int32Repeated: []int32{}}, true},
{"repeated, empty equal nil", &pb.GoTest{F_Int32Repeated: []int32{}}, &pb.GoTest{F_Int32Repeated: nil}, true},
{
"nested, different",
&pb.GoTest{RequiredField: &pb.GoTestField{Label: String("foo")}},
&pb.GoTest{RequiredField: &pb.GoTestField{Label: String("bar")}},
false,
},
{
"nested, equal",
&pb.GoTest{RequiredField: &pb.GoTestField{Label: String("wow")}},
&pb.GoTest{RequiredField: &pb.GoTestField{Label: String("wow")}},
true,
},
{"bytes", &pb.OtherMessage{Value: []byte("foo")}, &pb.OtherMessage{Value: []byte("foo")}, true},
{"bytes, empty", &pb.OtherMessage{Value: []byte{}}, &pb.OtherMessage{Value: []byte{}}, true},
{"bytes, empty vs nil", &pb.OtherMessage{Value: []byte{}}, &pb.OtherMessage{Value: nil}, false},
{
"repeated bytes",
&pb.MyMessage{RepBytes: [][]byte{[]byte("sham"), []byte("wow")}},
&pb.MyMessage{RepBytes: [][]byte{[]byte("sham"), []byte("wow")}},
true,
},
// In proto3, []byte{} and []byte(nil) are equal.
{"proto3 bytes, empty vs nil", &proto3pb.Message{Data: []byte{}}, &proto3pb.Message{Data: nil}, true},
{"extension vs. no extension", messageWithoutExtension, messageWithExtension1a, false},
{"extension vs. same extension", messageWithExtension1a, messageWithExtension1b, true},
{"extension vs. different extension", messageWithExtension1a, messageWithExtension2, false},
{"int32 extension vs. itself", messageWithInt32Extension1, messageWithInt32Extension1, true},
{"int32 extension vs. a different int32", messageWithInt32Extension1, messageWithInt32Extension2, false},
{
"message with group",
&pb.MyMessage{
Count: Int32(1),
Somegroup: &pb.MyMessage_SomeGroup{
GroupField: Int32(5),
},
},
&pb.MyMessage{
Count: Int32(1),
Somegroup: &pb.MyMessage_SomeGroup{
GroupField: Int32(5),
},
},
true,
},
{
"map same",
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
true,
},
{
"map different entry",
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
&pb.MessageWithMap{NameMapping: map[int32]string{2: "Rob"}},
false,
},
{
"map different key only",
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
&pb.MessageWithMap{NameMapping: map[int32]string{2: "Ken"}},
false,
},
{
"map different value only",
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken"}},
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Rob"}},
false,
},
{
"zero-length maps same",
&pb.MessageWithMap{NameMapping: map[int32]string{}},
&pb.MessageWithMap{NameMapping: nil},
true,
},
{
"orders in map don't matter",
&pb.MessageWithMap{NameMapping: map[int32]string{1: "Ken", 2: "Rob"}},
&pb.MessageWithMap{NameMapping: map[int32]string{2: "Rob", 1: "Ken"}},
true,
},
{
"oneof same",
&pb.Communique{Union: &pb.Communique_Number{41}},
&pb.Communique{Union: &pb.Communique_Number{41}},
true,
},
{
"oneof one nil",
&pb.Communique{Union: &pb.Communique_Number{41}},
&pb.Communique{},
false,
},
{
"oneof different",
&pb.Communique{Union: &pb.Communique_Number{41}},
&pb.Communique{Union: &pb.Communique_Name{"Bobby Tables"}},
false,
},
}
func TestEqual(t *testing.T) {
for _, tc := range EqualTests {
if res := Equal(tc.a, tc.b); res != tc.exp {
t.Errorf("%v: Equal(%v, %v) = %v, want %v", tc.desc, tc.a, tc.b, res, tc.exp)
}
}
}
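The extension cases above hinge on SetExtension being applied to one side before comparison; the same pattern in isolation, using the proto2 testdata types, as an illustrative sketch:

package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/proto"
	pb "github.com/golang/protobuf/proto/testdata"
)

func main() {
	m1 := &pb.MyMessage{Count: proto.Int32(7)}
	m2 := &pb.MyMessage{Count: proto.Int32(7)}
	fmt.Println(proto.Equal(m1, m2)) // true: identical base messages

	if err := proto.SetExtension(m1, pb.E_Ext_More, &pb.Ext{Data: proto.String("Kirk")}); err != nil {
		log.Fatal(err)
	}
	fmt.Println(proto.Equal(m1, m2)) // false: extension present on only one side
}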

587
vendor/github.com/golang/protobuf/proto/extensions.go generated vendored Normal file
View File

@ -0,0 +1,587 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
/*
* Types and routines for supporting protocol buffer extensions.
*/
import (
"errors"
"fmt"
"reflect"
"strconv"
"sync"
)
// ErrMissingExtension is the error returned by GetExtension if the named extension is not in the message.
var ErrMissingExtension = errors.New("proto: missing extension")
// ExtensionRange represents a range of message extensions for a protocol buffer.
// Used in code generated by the protocol compiler.
type ExtensionRange struct {
Start, End int32 // both inclusive
}
// extendableProto is an interface implemented by any protocol buffer generated by the current
// proto compiler that may be extended.
type extendableProto interface {
Message
ExtensionRangeArray() []ExtensionRange
extensionsWrite() map[int32]Extension
extensionsRead() (map[int32]Extension, sync.Locker)
}
// extendableProtoV1 is an interface implemented by a protocol buffer generated by the previous
// version of the proto compiler that may be extended.
type extendableProtoV1 interface {
Message
ExtensionRangeArray() []ExtensionRange
ExtensionMap() map[int32]Extension
}
// extensionAdapter is a wrapper around extendableProtoV1 that implements extendableProto.
type extensionAdapter struct {
extendableProtoV1
}
func (e extensionAdapter) extensionsWrite() map[int32]Extension {
return e.ExtensionMap()
}
func (e extensionAdapter) extensionsRead() (map[int32]Extension, sync.Locker) {
return e.ExtensionMap(), notLocker{}
}
// notLocker is a sync.Locker whose Lock and Unlock methods are nops.
type notLocker struct{}
func (n notLocker) Lock() {}
func (n notLocker) Unlock() {}
// extendable returns the extendableProto interface for the given generated proto message.
// If the proto message has the old extension format, it returns a wrapper that implements
// the extendableProto interface.
func extendable(p interface{}) (extendableProto, bool) {
if ep, ok := p.(extendableProto); ok {
return ep, ok
}
if ep, ok := p.(extendableProtoV1); ok {
return extensionAdapter{ep}, ok
}
return nil, false
}
// XXX_InternalExtensions is an internal representation of proto extensions.
//
// Each generated message struct type embeds an anonymous XXX_InternalExtensions field,
// thus gaining the unexported 'extensions' method, which can be called only from the proto package.
//
// The methods of XXX_InternalExtensions are not concurrency safe in general,
// but calls to logically read-only methods such as has and get may be executed concurrently.
type XXX_InternalExtensions struct {
// The struct must be indirect so that if a user inadvertently copies a
// generated message and its embedded XXX_InternalExtensions, they
// avoid the mayhem of a copied mutex.
//
// The mutex serializes all logically read-only operations to p.extensionMap.
// It is up to the client to ensure that write operations to p.extensionMap are
// mutually exclusive with other accesses.
p *struct {
mu sync.Mutex
extensionMap map[int32]Extension
}
}
// extensionsWrite returns the extension map, creating it on first use.
func (e *XXX_InternalExtensions) extensionsWrite() map[int32]Extension {
if e.p == nil {
e.p = new(struct {
mu sync.Mutex
extensionMap map[int32]Extension
})
e.p.extensionMap = make(map[int32]Extension)
}
return e.p.extensionMap
}
// extensionsRead returns the extensions map for read-only use. It may be nil.
// The caller must hold the returned mutex's lock when accessing Elements within the map.
func (e *XXX_InternalExtensions) extensionsRead() (map[int32]Extension, sync.Locker) {
if e.p == nil {
return nil, nil
}
return e.p.extensionMap, &e.p.mu
}
var extendableProtoType = reflect.TypeOf((*extendableProto)(nil)).Elem()
var extendableProtoV1Type = reflect.TypeOf((*extendableProtoV1)(nil)).Elem()
// ExtensionDesc represents an extension specification.
// Used in generated code from the protocol compiler.
type ExtensionDesc struct {
ExtendedType Message // nil pointer to the type that is being extended
ExtensionType interface{} // nil pointer to the extension type
Field int32 // field number
Name string // fully-qualified name of extension, for text formatting
Tag string // protobuf tag style
Filename string // name of the file in which the extension is defined
}
func (ed *ExtensionDesc) repeated() bool {
t := reflect.TypeOf(ed.ExtensionType)
return t.Kind() == reflect.Slice && t.Elem().Kind() != reflect.Uint8
}
// Extension represents an extension in a message.
type Extension struct {
// When an extension is stored in a message using SetExtension
// only desc and value are set. When the message is marshaled
// enc will be set to the encoded form of the message.
//
// When a message is unmarshaled and contains extensions, each
// extension will have only enc set. When such an extension is
// accessed using GetExtension (or GetExtensions) desc and value
// will be set.
desc *ExtensionDesc
value interface{}
enc []byte
}
// SetRawExtension is for testing only.
func SetRawExtension(base Message, id int32, b []byte) {
epb, ok := extendable(base)
if !ok {
return
}
extmap := epb.extensionsWrite()
extmap[id] = Extension{enc: b}
}
// isExtensionField returns true iff the given field number is in an extension range.
func isExtensionField(pb extendableProto, field int32) bool {
for _, er := range pb.ExtensionRangeArray() {
if er.Start <= field && field <= er.End {
return true
}
}
return false
}
// checkExtensionTypes checks that the given extension is valid for pb.
func checkExtensionTypes(pb extendableProto, extension *ExtensionDesc) error {
var pbi interface{} = pb
// Check the extended type.
if ea, ok := pbi.(extensionAdapter); ok {
pbi = ea.extendableProtoV1
}
if a, b := reflect.TypeOf(pbi), reflect.TypeOf(extension.ExtendedType); a != b {
return errors.New("proto: bad extended type; " + b.String() + " does not extend " + a.String())
}
// Check the range.
if !isExtensionField(pb, extension.Field) {
return errors.New("proto: bad extension number; not in declared ranges")
}
return nil
}
// extPropKey is sufficient to uniquely identify an extension.
type extPropKey struct {
base reflect.Type
field int32
}
var extProp = struct {
sync.RWMutex
m map[extPropKey]*Properties
}{
m: make(map[extPropKey]*Properties),
}
func extensionProperties(ed *ExtensionDesc) *Properties {
key := extPropKey{base: reflect.TypeOf(ed.ExtendedType), field: ed.Field}
extProp.RLock()
if prop, ok := extProp.m[key]; ok {
extProp.RUnlock()
return prop
}
extProp.RUnlock()
extProp.Lock()
defer extProp.Unlock()
// Check again.
if prop, ok := extProp.m[key]; ok {
return prop
}
prop := new(Properties)
prop.Init(reflect.TypeOf(ed.ExtensionType), "unknown_name", ed.Tag, nil)
extProp.m[key] = prop
return prop
}
// encode encodes any unmarshaled (unencoded) extensions in e.
func encodeExtensions(e *XXX_InternalExtensions) error {
m, mu := e.extensionsRead()
if m == nil {
return nil // fast path
}
mu.Lock()
defer mu.Unlock()
return encodeExtensionsMap(m)
}
// encode encodes any unmarshaled (unencoded) extensions in e.
func encodeExtensionsMap(m map[int32]Extension) error {
for k, e := range m {
if e.value == nil || e.desc == nil {
// Extension is only in its encoded form.
continue
}
// We don't skip extensions that have an encoded form set,
// because the extension value may have been mutated after
// the last time this function was called.
et := reflect.TypeOf(e.desc.ExtensionType)
props := extensionProperties(e.desc)
p := NewBuffer(nil)
// If e.value has type T, the encoder expects a *struct{ X T }.
// Pass a *T with a zero field and hope it all works out.
x := reflect.New(et)
x.Elem().Set(reflect.ValueOf(e.value))
if err := props.enc(p, props, toStructPointer(x)); err != nil {
return err
}
e.enc = p.buf
m[k] = e
}
return nil
}
func extensionsSize(e *XXX_InternalExtensions) (n int) {
m, mu := e.extensionsRead()
if m == nil {
return 0
}
mu.Lock()
defer mu.Unlock()
return extensionsMapSize(m)
}
func extensionsMapSize(m map[int32]Extension) (n int) {
for _, e := range m {
if e.value == nil || e.desc == nil {
// Extension is only in its encoded form.
n += len(e.enc)
continue
}
// We don't skip extensions that have an encoded form set,
// because the extension value may have been mutated after
// the last time this function was called.
et := reflect.TypeOf(e.desc.ExtensionType)
props := extensionProperties(e.desc)
// If e.value has type T, the encoder expects a *struct{ X T }.
// Pass a *T with a zero field and hope it all works out.
x := reflect.New(et)
x.Elem().Set(reflect.ValueOf(e.value))
n += props.size(props, toStructPointer(x))
}
return
}
// HasExtension returns whether the given extension is present in pb.
func HasExtension(pb Message, extension *ExtensionDesc) bool {
// TODO: Check types, field numbers, etc.?
epb, ok := extendable(pb)
if !ok {
return false
}
extmap, mu := epb.extensionsRead()
if extmap == nil {
return false
}
mu.Lock()
_, ok = extmap[extension.Field]
mu.Unlock()
return ok
}
// ClearExtension removes the given extension from pb.
func ClearExtension(pb Message, extension *ExtensionDesc) {
epb, ok := extendable(pb)
if !ok {
return
}
// TODO: Check types, field numbers, etc.?
extmap := epb.extensionsWrite()
delete(extmap, extension.Field)
}
// GetExtension parses and returns the given extension of pb.
// If the extension is not present and has no default value it returns ErrMissingExtension.
func GetExtension(pb Message, extension *ExtensionDesc) (interface{}, error) {
epb, ok := extendable(pb)
if !ok {
return nil, errors.New("proto: not an extendable proto")
}
if err := checkExtensionTypes(epb, extension); err != nil {
return nil, err
}
emap, mu := epb.extensionsRead()
if emap == nil {
return defaultExtensionValue(extension)
}
mu.Lock()
defer mu.Unlock()
e, ok := emap[extension.Field]
if !ok {
// defaultExtensionValue returns the default value or
// ErrMissingExtension if there is no default.
return defaultExtensionValue(extension)
}
if e.value != nil {
// Already decoded. Check the descriptor, though.
if e.desc != extension {
// This shouldn't happen. If it does, it means that
// GetExtension was called twice with two different
// descriptors with the same field number.
return nil, errors.New("proto: descriptor conflict")
}
return e.value, nil
}
v, err := decodeExtension(e.enc, extension)
if err != nil {
return nil, err
}
// Remember the decoded version and drop the encoded version.
// That way it is safe to mutate what we return.
e.value = v
e.desc = extension
e.enc = nil
emap[extension.Field] = e
return e.value, nil
}
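// Illustrative sketch: reading an extension from a generated message, assuming
// the generated test types pb.MyMessage, pb.E_Ext_More and pb.Ext used in this
// package's tests:
//
//	msg := &pb.MyMessage{}
//	if proto.HasExtension(msg, pb.E_Ext_More) {
//		v, err := proto.GetExtension(msg, pb.E_Ext_More)
//		if err == nil {
//			ext := v.(*pb.Ext) // assert to the descriptor's ExtensionType
//			_ = ext
//		}
//	}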
// defaultExtensionValue returns the default value for extension.
// If no default for an extension is defined ErrMissingExtension is returned.
func defaultExtensionValue(extension *ExtensionDesc) (interface{}, error) {
t := reflect.TypeOf(extension.ExtensionType)
props := extensionProperties(extension)
sf, _, err := fieldDefault(t, props)
if err != nil {
return nil, err
}
if sf == nil || sf.value == nil {
// There is no default value.
return nil, ErrMissingExtension
}
if t.Kind() != reflect.Ptr {
// We do not need to return a Ptr, we can directly return sf.value.
return sf.value, nil
}
// We need to return an interface{} that is a pointer to sf.value.
value := reflect.New(t).Elem()
value.Set(reflect.New(value.Type().Elem()))
if sf.kind == reflect.Int32 {
// We may have an int32 or an enum, but the underlying data is int32.
// Since we can't set an int32 into a non int32 reflect.value directly
// set it as a int32.
value.Elem().SetInt(int64(sf.value.(int32)))
} else {
value.Elem().Set(reflect.ValueOf(sf.value))
}
return value.Interface(), nil
}
// decodeExtension decodes an extension encoded in b.
func decodeExtension(b []byte, extension *ExtensionDesc) (interface{}, error) {
o := NewBuffer(b)
t := reflect.TypeOf(extension.ExtensionType)
props := extensionProperties(extension)
// t is a pointer to a struct, pointer to basic type or a slice.
// Allocate a "field" to store the pointer/slice itself; the
// pointer/slice will be stored here. We pass
// the address of this field to props.dec.
// This passes a zero field and a *t and lets props.dec
// interpret it as a *struct{ x t }.
value := reflect.New(t).Elem()
for {
// Discard wire type and field number varint. It isn't needed.
if _, err := o.DecodeVarint(); err != nil {
return nil, err
}
if err := props.dec(o, props, toStructPointer(value.Addr())); err != nil {
return nil, err
}
if o.index >= len(o.buf) {
break
}
}
return value.Interface(), nil
}
// GetExtensions returns a slice of the extensions present in pb that are also listed in es.
// The returned slice has the same length as es; missing extensions will appear as nil elements.
func GetExtensions(pb Message, es []*ExtensionDesc) (extensions []interface{}, err error) {
epb, ok := extendable(pb)
if !ok {
return nil, errors.New("proto: not an extendable proto")
}
extensions = make([]interface{}, len(es))
for i, e := range es {
extensions[i], err = GetExtension(epb, e)
if err == ErrMissingExtension {
err = nil
}
if err != nil {
return
}
}
return
}
// ExtensionDescs returns a new slice containing pb's extension descriptors, in undefined order.
// For non-registered extensions, ExtensionDescs returns an incomplete descriptor containing
// just the Field field, which defines the extension's field number.
func ExtensionDescs(pb Message) ([]*ExtensionDesc, error) {
epb, ok := extendable(pb)
if !ok {
return nil, fmt.Errorf("proto: %T is not an extendable proto.Message", pb)
}
registeredExtensions := RegisteredExtensions(pb)
emap, mu := epb.extensionsRead()
if emap == nil {
return nil, nil
}
mu.Lock()
defer mu.Unlock()
extensions := make([]*ExtensionDesc, 0, len(emap))
for extid, e := range emap {
desc := e.desc
if desc == nil {
desc = registeredExtensions[extid]
if desc == nil {
desc = &ExtensionDesc{Field: extid}
}
}
extensions = append(extensions, desc)
}
return extensions, nil
}
// SetExtension sets the specified extension of pb to the specified value.
func SetExtension(pb Message, extension *ExtensionDesc, value interface{}) error {
epb, ok := extendable(pb)
if !ok {
return errors.New("proto: not an extendable proto")
}
if err := checkExtensionTypes(epb, extension); err != nil {
return err
}
typ := reflect.TypeOf(extension.ExtensionType)
if typ != reflect.TypeOf(value) {
return errors.New("proto: bad extension value type")
}
// nil extension values need to be caught early, because the
// encoder can't distinguish an ErrNil due to a nil extension
// from an ErrNil due to a missing field. Extensions are
// always optional, so the encoder would just swallow the error
// and drop all the extensions from the encoded message.
if reflect.ValueOf(value).IsNil() {
return fmt.Errorf("proto: SetExtension called with nil value of type %T", value)
}
extmap := epb.extensionsWrite()
extmap[extension.Field] = Extension{desc: extension, value: value}
return nil
}
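// Illustrative sketch: setting extensions, assuming the generated test types
// pb.MyMessage, pb.E_Ext_More and pb.E_Ext_Text used in this package's tests.
// The value must have exactly the descriptor's ExtensionType (a pointer or a
// slice) and must not be nil:
//
//	msg := &pb.MyMessage{}
//	if err := proto.SetExtension(msg, pb.E_Ext_More, &pb.Ext{Data: proto.String("hi")}); err != nil {
//		// handle error
//	}
//	if err := proto.SetExtension(msg, pb.E_Ext_Text, proto.String("hello")); err != nil {
//		// handle error
//	}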
// ClearAllExtensions clears all extensions from pb.
func ClearAllExtensions(pb Message) {
epb, ok := extendable(pb)
if !ok {
return
}
m := epb.extensionsWrite()
for k := range m {
delete(m, k)
}
}
// A global registry of extensions.
// The generated code will register the generated descriptors by calling RegisterExtension.
var extensionMaps = make(map[reflect.Type]map[int32]*ExtensionDesc)
// RegisterExtension is called from the generated code.
func RegisterExtension(desc *ExtensionDesc) {
st := reflect.TypeOf(desc.ExtendedType).Elem()
m := extensionMaps[st]
if m == nil {
m = make(map[int32]*ExtensionDesc)
extensionMaps[st] = m
}
if _, ok := m[desc.Field]; ok {
panic("proto: duplicate extension registered: " + st.String() + " " + strconv.Itoa(int(desc.Field)))
}
m[desc.Field] = desc
}
// RegisteredExtensions returns a map of the registered extensions of a
// protocol buffer struct, indexed by the extension number.
// The argument pb should be a nil pointer to the struct type.
func RegisteredExtensions(pb Message) map[int32]*ExtensionDesc {
return extensionMaps[reflect.TypeOf(pb).Elem()]
}


@ -0,0 +1,536 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2014 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto_test
import (
"bytes"
"fmt"
"reflect"
"sort"
"testing"
"github.com/golang/protobuf/proto"
pb "github.com/golang/protobuf/proto/testdata"
"golang.org/x/sync/errgroup"
)
func TestGetExtensionsWithMissingExtensions(t *testing.T) {
msg := &pb.MyMessage{}
ext1 := &pb.Ext{}
if err := proto.SetExtension(msg, pb.E_Ext_More, ext1); err != nil {
t.Fatalf("Could not set ext1: %s", err)
}
exts, err := proto.GetExtensions(msg, []*proto.ExtensionDesc{
pb.E_Ext_More,
pb.E_Ext_Text,
})
if err != nil {
t.Fatalf("GetExtensions() failed: %s", err)
}
if exts[0] != ext1 {
t.Errorf("ext1 not in returned extensions: %T %v", exts[0], exts[0])
}
if exts[1] != nil {
t.Errorf("ext2 in returned extensions: %T %v", exts[1], exts[1])
}
}
func TestExtensionDescsWithMissingExtensions(t *testing.T) {
msg := &pb.MyMessage{Count: proto.Int32(0)}
extdesc1 := pb.E_Ext_More
if descs, err := proto.ExtensionDescs(msg); len(descs) != 0 || err != nil {
t.Errorf("proto.ExtensionDescs: got %d descs, error %v; want 0, nil", len(descs), err)
}
ext1 := &pb.Ext{}
if err := proto.SetExtension(msg, extdesc1, ext1); err != nil {
t.Fatalf("Could not set ext1: %s", err)
}
extdesc2 := &proto.ExtensionDesc{
ExtendedType: (*pb.MyMessage)(nil),
ExtensionType: (*bool)(nil),
Field: 123456789,
Name: "a.b",
Tag: "varint,123456789,opt",
}
ext2 := proto.Bool(false)
if err := proto.SetExtension(msg, extdesc2, ext2); err != nil {
t.Fatalf("Could not set ext2: %s", err)
}
b, err := proto.Marshal(msg)
if err != nil {
t.Fatalf("Could not marshal msg: %v", err)
}
if err := proto.Unmarshal(b, msg); err != nil {
t.Fatalf("Could not unmarshal into msg: %v", err)
}
descs, err := proto.ExtensionDescs(msg)
if err != nil {
t.Fatalf("proto.ExtensionDescs: got error %v", err)
}
sortExtDescs(descs)
wantDescs := []*proto.ExtensionDesc{extdesc1, &proto.ExtensionDesc{Field: extdesc2.Field}}
if !reflect.DeepEqual(descs, wantDescs) {
t.Errorf("proto.ExtensionDescs(msg) sorted extension ids: got %+v, want %+v", descs, wantDescs)
}
}
type ExtensionDescSlice []*proto.ExtensionDesc
func (s ExtensionDescSlice) Len() int { return len(s) }
func (s ExtensionDescSlice) Less(i, j int) bool { return s[i].Field < s[j].Field }
func (s ExtensionDescSlice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func sortExtDescs(s []*proto.ExtensionDesc) {
sort.Sort(ExtensionDescSlice(s))
}
func TestGetExtensionStability(t *testing.T) {
check := func(m *pb.MyMessage) bool {
ext1, err := proto.GetExtension(m, pb.E_Ext_More)
if err != nil {
t.Fatalf("GetExtension() failed: %s", err)
}
ext2, err := proto.GetExtension(m, pb.E_Ext_More)
if err != nil {
t.Fatalf("GetExtension() failed: %s", err)
}
return ext1 == ext2
}
msg := &pb.MyMessage{Count: proto.Int32(4)}
ext0 := &pb.Ext{}
if err := proto.SetExtension(msg, pb.E_Ext_More, ext0); err != nil {
t.Fatalf("Could not set ext1: %s", ext0)
}
if !check(msg) {
t.Errorf("GetExtension() not stable before marshaling")
}
bb, err := proto.Marshal(msg)
if err != nil {
t.Fatalf("Marshal() failed: %s", err)
}
msg1 := &pb.MyMessage{}
err = proto.Unmarshal(bb, msg1)
if err != nil {
t.Fatalf("Unmarshal() failed: %s", err)
}
if !check(msg1) {
t.Errorf("GetExtension() not stable after unmarshaling")
}
}
func TestGetExtensionDefaults(t *testing.T) {
var setFloat64 float64 = 1
var setFloat32 float32 = 2
var setInt32 int32 = 3
var setInt64 int64 = 4
var setUint32 uint32 = 5
var setUint64 uint64 = 6
var setBool = true
var setBool2 = false
var setString = "Goodnight string"
var setBytes = []byte("Goodnight bytes")
var setEnum = pb.DefaultsMessage_TWO
type testcase struct {
ext *proto.ExtensionDesc // Extension we are testing.
want interface{} // Expected value of extension, or nil (meaning that GetExtension will fail).
def interface{} // Expected value of extension after ClearExtension().
}
tests := []testcase{
{pb.E_NoDefaultDouble, setFloat64, nil},
{pb.E_NoDefaultFloat, setFloat32, nil},
{pb.E_NoDefaultInt32, setInt32, nil},
{pb.E_NoDefaultInt64, setInt64, nil},
{pb.E_NoDefaultUint32, setUint32, nil},
{pb.E_NoDefaultUint64, setUint64, nil},
{pb.E_NoDefaultSint32, setInt32, nil},
{pb.E_NoDefaultSint64, setInt64, nil},
{pb.E_NoDefaultFixed32, setUint32, nil},
{pb.E_NoDefaultFixed64, setUint64, nil},
{pb.E_NoDefaultSfixed32, setInt32, nil},
{pb.E_NoDefaultSfixed64, setInt64, nil},
{pb.E_NoDefaultBool, setBool, nil},
{pb.E_NoDefaultBool, setBool2, nil},
{pb.E_NoDefaultString, setString, nil},
{pb.E_NoDefaultBytes, setBytes, nil},
{pb.E_NoDefaultEnum, setEnum, nil},
{pb.E_DefaultDouble, setFloat64, float64(3.1415)},
{pb.E_DefaultFloat, setFloat32, float32(3.14)},
{pb.E_DefaultInt32, setInt32, int32(42)},
{pb.E_DefaultInt64, setInt64, int64(43)},
{pb.E_DefaultUint32, setUint32, uint32(44)},
{pb.E_DefaultUint64, setUint64, uint64(45)},
{pb.E_DefaultSint32, setInt32, int32(46)},
{pb.E_DefaultSint64, setInt64, int64(47)},
{pb.E_DefaultFixed32, setUint32, uint32(48)},
{pb.E_DefaultFixed64, setUint64, uint64(49)},
{pb.E_DefaultSfixed32, setInt32, int32(50)},
{pb.E_DefaultSfixed64, setInt64, int64(51)},
{pb.E_DefaultBool, setBool, true},
{pb.E_DefaultBool, setBool2, true},
{pb.E_DefaultString, setString, "Hello, string"},
{pb.E_DefaultBytes, setBytes, []byte("Hello, bytes")},
{pb.E_DefaultEnum, setEnum, pb.DefaultsMessage_ONE},
}
checkVal := func(test testcase, msg *pb.DefaultsMessage, valWant interface{}) error {
val, err := proto.GetExtension(msg, test.ext)
if err != nil {
if valWant != nil {
return fmt.Errorf("GetExtension(): %s", err)
}
if want := proto.ErrMissingExtension; err != want {
return fmt.Errorf("Unexpected error: got %v, want %v", err, want)
}
return nil
}
// All proto2 extension values are either a pointer to a value or a slice of values.
ty := reflect.TypeOf(val)
tyWant := reflect.TypeOf(test.ext.ExtensionType)
if got, want := ty, tyWant; got != want {
return fmt.Errorf("unexpected reflect.TypeOf(): got %v want %v", got, want)
}
tye := ty.Elem()
tyeWant := tyWant.Elem()
if got, want := tye, tyeWant; got != want {
return fmt.Errorf("unexpected reflect.TypeOf().Elem(): got %v want %v", got, want)
}
// Check the name of the type of the value.
// If it is an enum it will be type int32 with the name of the enum.
if got, want := tye.Name(), tyeWant.Name(); got != want {
return fmt.Errorf("unexpected reflect.TypeOf().Elem().Name(): got %v want %v", got, want)
}
// Check that value is what we expect.
// If we have a pointer in val, get the value it points to.
valExp := val
if ty.Kind() == reflect.Ptr {
valExp = reflect.ValueOf(val).Elem().Interface()
}
if got, want := valExp, valWant; !reflect.DeepEqual(got, want) {
return fmt.Errorf("unexpected reflect.DeepEqual(): got %v want %v", got, want)
}
return nil
}
setTo := func(test testcase) interface{} {
setTo := reflect.ValueOf(test.want)
if typ := reflect.TypeOf(test.ext.ExtensionType); typ.Kind() == reflect.Ptr {
setTo = reflect.New(typ).Elem()
setTo.Set(reflect.New(setTo.Type().Elem()))
setTo.Elem().Set(reflect.ValueOf(test.want))
}
return setTo.Interface()
}
for _, test := range tests {
msg := &pb.DefaultsMessage{}
name := test.ext.Name
// Check the initial value.
if err := checkVal(test, msg, test.def); err != nil {
t.Errorf("%s: %v", name, err)
}
// Set the per-type value and check value.
name = fmt.Sprintf("%s (set to %T %v)", name, test.want, test.want)
if err := proto.SetExtension(msg, test.ext, setTo(test)); err != nil {
t.Errorf("%s: SetExtension(): %v", name, err)
continue
}
if err := checkVal(test, msg, test.want); err != nil {
t.Errorf("%s: %v", name, err)
continue
}
// Clear the extension and check that the default value is restored.
name += " (cleared)"
proto.ClearExtension(msg, test.ext)
if err := checkVal(test, msg, test.def); err != nil {
t.Errorf("%s: %v", name, err)
}
}
}
func TestExtensionsRoundTrip(t *testing.T) {
msg := &pb.MyMessage{}
ext1 := &pb.Ext{
Data: proto.String("hi"),
}
ext2 := &pb.Ext{
Data: proto.String("there"),
}
exists := proto.HasExtension(msg, pb.E_Ext_More)
if exists {
t.Error("Extension More present unexpectedly")
}
if err := proto.SetExtension(msg, pb.E_Ext_More, ext1); err != nil {
t.Error(err)
}
if err := proto.SetExtension(msg, pb.E_Ext_More, ext2); err != nil {
t.Error(err)
}
e, err := proto.GetExtension(msg, pb.E_Ext_More)
if err != nil {
t.Error(err)
}
x, ok := e.(*pb.Ext)
if !ok {
t.Errorf("e has type %T, expected testdata.Ext", e)
} else if *x.Data != "there" {
t.Errorf("SetExtension failed to overwrite, got %+v, not 'there'", x)
}
proto.ClearExtension(msg, pb.E_Ext_More)
if _, err = proto.GetExtension(msg, pb.E_Ext_More); err != proto.ErrMissingExtension {
t.Errorf("got %v, expected ErrMissingExtension", e)
}
if _, err := proto.GetExtension(msg, pb.E_X215); err == nil {
t.Error("expected bad extension error, got nil")
}
if err := proto.SetExtension(msg, pb.E_X215, 12); err == nil {
t.Error("expected extension err")
}
if err := proto.SetExtension(msg, pb.E_Ext_More, 12); err == nil {
t.Error("expected some sort of type mismatch error, got nil")
}
}
func TestNilExtension(t *testing.T) {
msg := &pb.MyMessage{
Count: proto.Int32(1),
}
if err := proto.SetExtension(msg, pb.E_Ext_Text, proto.String("hello")); err != nil {
t.Fatal(err)
}
if err := proto.SetExtension(msg, pb.E_Ext_More, (*pb.Ext)(nil)); err == nil {
t.Error("expected SetExtension to fail due to a nil extension")
} else if want := "proto: SetExtension called with nil value of type *testdata.Ext"; err.Error() != want {
t.Errorf("expected error %v, got %v", want, err)
}
// Note: if the behavior of Marshal is ever changed to ignore nil extensions, update
// this test to verify that E_Ext_Text is properly propagated through marshal->unmarshal.
}
func TestMarshalUnmarshalRepeatedExtension(t *testing.T) {
// Add a repeated extension to the result.
tests := []struct {
name string
ext []*pb.ComplexExtension
}{
{
"two fields",
[]*pb.ComplexExtension{
{First: proto.Int32(7)},
{Second: proto.Int32(11)},
},
},
{
"repeated field",
[]*pb.ComplexExtension{
{Third: []int32{1000}},
{Third: []int32{2000}},
},
},
{
"two fields and repeated field",
[]*pb.ComplexExtension{
{Third: []int32{1000}},
{First: proto.Int32(9)},
{Second: proto.Int32(21)},
{Third: []int32{2000}},
},
},
}
for _, test := range tests {
// Marshal message with a repeated extension.
msg1 := new(pb.OtherMessage)
err := proto.SetExtension(msg1, pb.E_RComplex, test.ext)
if err != nil {
t.Fatalf("[%s] Error setting extension: %v", test.name, err)
}
b, err := proto.Marshal(msg1)
if err != nil {
t.Fatalf("[%s] Error marshaling message: %v", test.name, err)
}
// Unmarshal and read the merged proto.
msg2 := new(pb.OtherMessage)
err = proto.Unmarshal(b, msg2)
if err != nil {
t.Fatalf("[%s] Error unmarshaling message: %v", test.name, err)
}
e, err := proto.GetExtension(msg2, pb.E_RComplex)
if err != nil {
t.Fatalf("[%s] Error getting extension: %v", test.name, err)
}
ext := e.([]*pb.ComplexExtension)
if ext == nil {
t.Fatalf("[%s] Invalid extension", test.name)
}
if !reflect.DeepEqual(ext, test.ext) {
t.Errorf("[%s] Wrong value for ComplexExtension: got: %v want: %v\n", test.name, ext, test.ext)
}
}
}
func TestUnmarshalRepeatingNonRepeatedExtension(t *testing.T) {
// We may see multiple instances of the same extension in the wire
// format. For example, the proto compiler may encode custom options in
// this way. Here, we verify that we merge the extensions together.
tests := []struct {
name string
ext []*pb.ComplexExtension
}{
{
"two fields",
[]*pb.ComplexExtension{
{First: proto.Int32(7)},
{Second: proto.Int32(11)},
},
},
{
"repeated field",
[]*pb.ComplexExtension{
{Third: []int32{1000}},
{Third: []int32{2000}},
},
},
{
"two fields and repeated field",
[]*pb.ComplexExtension{
{Third: []int32{1000}},
{First: proto.Int32(9)},
{Second: proto.Int32(21)},
{Third: []int32{2000}},
},
},
}
for _, test := range tests {
var buf bytes.Buffer
var want pb.ComplexExtension
// Generate a serialized representation of a repeated extension
// by catenating bytes together.
for i, e := range test.ext {
// Merge to create the wanted proto.
proto.Merge(&want, e)
// serialize the message
msg := new(pb.OtherMessage)
err := proto.SetExtension(msg, pb.E_Complex, e)
if err != nil {
t.Fatalf("[%s] Error setting extension %d: %v", test.name, i, err)
}
b, err := proto.Marshal(msg)
if err != nil {
t.Fatalf("[%s] Error marshaling message %d: %v", test.name, i, err)
}
buf.Write(b)
}
// Unmarshal and read the merged proto.
msg2 := new(pb.OtherMessage)
err := proto.Unmarshal(buf.Bytes(), msg2)
if err != nil {
t.Fatalf("[%s] Error unmarshaling message: %v", test.name, err)
}
e, err := proto.GetExtension(msg2, pb.E_Complex)
if err != nil {
t.Fatalf("[%s] Error getting extension: %v", test.name, err)
}
ext := e.(*pb.ComplexExtension)
if ext == nil {
t.Fatalf("[%s] Invalid extension", test.name)
}
if !reflect.DeepEqual(*ext, want) {
t.Errorf("[%s] Wrong value for ComplexExtension: got: %s want: %s\n", test.name, ext, want)
}
}
}
func TestClearAllExtensions(t *testing.T) {
// unregistered extension
desc := &proto.ExtensionDesc{
ExtendedType: (*pb.MyMessage)(nil),
ExtensionType: (*bool)(nil),
Field: 101010100,
Name: "emptyextension",
Tag: "varint,0,opt",
}
m := &pb.MyMessage{}
if proto.HasExtension(m, desc) {
t.Errorf("proto.HasExtension(%s): got true, want false", proto.MarshalTextString(m))
}
if err := proto.SetExtension(m, desc, proto.Bool(true)); err != nil {
t.Errorf("proto.SetExtension(m, desc, true): got error %q, want nil", err)
}
if !proto.HasExtension(m, desc) {
t.Errorf("proto.HasExtension(%s): got false, want true", proto.MarshalTextString(m))
}
proto.ClearAllExtensions(m)
if proto.HasExtension(m, desc) {
t.Errorf("proto.HasExtension(%s): got true, want false", proto.MarshalTextString(m))
}
}
func TestMarshalRace(t *testing.T) {
// unregistered extension
desc := &proto.ExtensionDesc{
ExtendedType: (*pb.MyMessage)(nil),
ExtensionType: (*bool)(nil),
Field: 101010100,
Name: "emptyextension",
Tag: "varint,0,opt",
}
m := &pb.MyMessage{Count: proto.Int32(4)}
if err := proto.SetExtension(m, desc, proto.Bool(true)); err != nil {
t.Errorf("proto.SetExtension(m, desc, true): got error %q, want nil", err)
}
var g errgroup.Group
for n := 3; n > 0; n-- {
g.Go(func() error {
_, err := proto.Marshal(m)
return err
})
}
if err := g.Wait(); err != nil {
t.Fatal(err)
}
}

897
vendor/github.com/golang/protobuf/proto/lib.go generated vendored Normal file

@ -0,0 +1,897 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
Package proto converts data structures to and from the wire format of
protocol buffers. It works in concert with the Go source code generated
for .proto files by the protocol compiler.
A summary of the properties of the protocol buffer interface
for a protocol buffer variable v:
- Names are turned from camel_case to CamelCase for export.
- There are no methods on v to set fields; just treat
them as structure fields.
- There are getters that return a field's value if set,
and return the field's default value if unset.
The getters work even if the receiver is a nil message.
- The zero value for a struct is its correct initialization state.
All desired fields must be set before marshaling.
- A Reset() method will restore a protobuf struct to its zero state.
- Non-repeated fields are pointers to the values; nil means unset.
That is, optional or required field int32 f becomes F *int32.
- Repeated fields are slices.
- Helper functions are available to aid the setting of fields.
msg.Foo = proto.String("hello") // set field
- Constants are defined to hold the default values of all fields that
have them. They have the form Default_StructName_FieldName.
Because the getter methods handle defaulted values,
direct use of these constants should be rare.
- Enums are given type names and maps from names to values.
Enum values are prefixed by the enclosing message's name, or by the
enum's type name if it is a top-level enum. Enum types have a String
method, and an Enum method to assist in message construction.
- Nested messages, groups and enums have type names prefixed with the name of
the surrounding message type.
- Extensions are given descriptor names that start with E_,
followed by an underscore-delimited list of the nested messages
that contain it (if any) followed by the CamelCased name of the
extension field itself. HasExtension, ClearExtension, GetExtension
and SetExtension are functions for manipulating extensions.
- Oneof field sets are given a single field in their message,
with distinguished wrapper types for each possible field value.
- Marshal and Unmarshal are functions to encode and decode the wire format.
When the .proto file specifies `syntax="proto3"`, there are some differences:
- Non-repeated fields of non-message type are values instead of pointers.
- Enum types do not get an Enum method.
The simplest way to describe this is to see an example.
Given file test.proto, containing
package example;
enum FOO { X = 17; }
message Test {
required string label = 1;
optional int32 type = 2 [default=77];
repeated int64 reps = 3;
optional group OptionalGroup = 4 {
required string RequiredField = 5;
}
oneof union {
int32 number = 6;
string name = 7;
}
}
The resulting file, test.pb.go, is:
package example
import proto "github.com/golang/protobuf/proto"
import math "math"
type FOO int32
const (
FOO_X FOO = 17
)
var FOO_name = map[int32]string{
17: "X",
}
var FOO_value = map[string]int32{
"X": 17,
}
func (x FOO) Enum() *FOO {
p := new(FOO)
*p = x
return p
}
func (x FOO) String() string {
return proto.EnumName(FOO_name, int32(x))
}
func (x *FOO) UnmarshalJSON(data []byte) error {
value, err := proto.UnmarshalJSONEnum(FOO_value, data)
if err != nil {
return err
}
*x = FOO(value)
return nil
}
type Test struct {
Label *string `protobuf:"bytes,1,req,name=label" json:"label,omitempty"`
Type *int32 `protobuf:"varint,2,opt,name=type,def=77" json:"type,omitempty"`
Reps []int64 `protobuf:"varint,3,rep,name=reps" json:"reps,omitempty"`
Optionalgroup *Test_OptionalGroup `protobuf:"group,4,opt,name=OptionalGroup" json:"optionalgroup,omitempty"`
// Types that are valid to be assigned to Union:
// *Test_Number
// *Test_Name
Union isTest_Union `protobuf_oneof:"union"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Test) Reset() { *m = Test{} }
func (m *Test) String() string { return proto.CompactTextString(m) }
func (*Test) ProtoMessage() {}
type isTest_Union interface {
isTest_Union()
}
type Test_Number struct {
Number int32 `protobuf:"varint,6,opt,name=number"`
}
type Test_Name struct {
Name string `protobuf:"bytes,7,opt,name=name"`
}
func (*Test_Number) isTest_Union() {}
func (*Test_Name) isTest_Union() {}
func (m *Test) GetUnion() isTest_Union {
if m != nil {
return m.Union
}
return nil
}
const Default_Test_Type int32 = 77
func (m *Test) GetLabel() string {
if m != nil && m.Label != nil {
return *m.Label
}
return ""
}
func (m *Test) GetType() int32 {
if m != nil && m.Type != nil {
return *m.Type
}
return Default_Test_Type
}
func (m *Test) GetOptionalgroup() *Test_OptionalGroup {
if m != nil {
return m.Optionalgroup
}
return nil
}
type Test_OptionalGroup struct {
RequiredField *string `protobuf:"bytes,5,req" json:"RequiredField,omitempty"`
}
func (m *Test_OptionalGroup) Reset() { *m = Test_OptionalGroup{} }
func (m *Test_OptionalGroup) String() string { return proto.CompactTextString(m) }
func (m *Test_OptionalGroup) GetRequiredField() string {
if m != nil && m.RequiredField != nil {
return *m.RequiredField
}
return ""
}
func (m *Test) GetNumber() int32 {
if x, ok := m.GetUnion().(*Test_Number); ok {
return x.Number
}
return 0
}
func (m *Test) GetName() string {
if x, ok := m.GetUnion().(*Test_Name); ok {
return x.Name
}
return ""
}
func init() {
proto.RegisterEnum("example.FOO", FOO_name, FOO_value)
}
To create and play with a Test object:
package main
import (
"log"
"github.com/golang/protobuf/proto"
pb "./example.pb"
)
func main() {
test := &pb.Test{
Label: proto.String("hello"),
Type: proto.Int32(17),
Reps: []int64{1, 2, 3},
Optionalgroup: &pb.Test_OptionalGroup{
RequiredField: proto.String("good bye"),
},
Union: &pb.Test_Name{"fred"},
}
data, err := proto.Marshal(test)
if err != nil {
log.Fatal("marshaling error: ", err)
}
newTest := &pb.Test{}
err = proto.Unmarshal(data, newTest)
if err != nil {
log.Fatal("unmarshaling error: ", err)
}
// Now test and newTest contain the same data.
if test.GetLabel() != newTest.GetLabel() {
log.Fatalf("data mismatch %q != %q", test.GetLabel(), newTest.GetLabel())
}
// Use a type switch to determine which oneof was set.
switch u := test.Union.(type) {
case *pb.Test_Number: // u.Number contains the number.
case *pb.Test_Name: // u.Name contains the string.
}
// etc.
}
*/
package proto
import (
"encoding/json"
"fmt"
"log"
"reflect"
"sort"
"strconv"
"sync"
)
// Message is implemented by generated protocol buffer messages.
type Message interface {
Reset()
String() string
ProtoMessage()
}
// Stats records allocation details about the protocol buffer encoders
// and decoders. Useful for tuning the library itself.
type Stats struct {
Emalloc uint64 // mallocs in encode
Dmalloc uint64 // mallocs in decode
Encode uint64 // number of encodes
Decode uint64 // number of decodes
Chit uint64 // number of cache hits
Cmiss uint64 // number of cache misses
Size uint64 // number of sizes
}
// Set to true to enable stats collection.
const collectStats = false
var stats Stats
// GetStats returns a copy of the global Stats structure.
func GetStats() Stats { return stats }
// A Buffer is a buffer manager for marshaling and unmarshaling
// protocol buffers. It may be reused between invocations to
// reduce memory usage. It is not necessary to use a Buffer;
// the global functions Marshal and Unmarshal create a
// temporary Buffer and are fine for most applications.
type Buffer struct {
buf []byte // encode/decode byte stream
index int // read point
// pools of basic types to amortize allocation.
bools []bool
uint32s []uint32
uint64s []uint64
// extra pools, only used with pointer_reflect.go
int32s []int32
int64s []int64
float32s []float32
float64s []float64
}
// NewBuffer allocates a new Buffer and initializes its internal data to
// the contents of the argument slice.
func NewBuffer(e []byte) *Buffer {
return &Buffer{buf: e}
}
// Reset resets the Buffer, ready for marshaling a new protocol buffer.
func (p *Buffer) Reset() {
p.buf = p.buf[0:0] // for reading/writing
p.index = 0 // for reading
}
// SetBuf replaces the internal buffer with the slice,
// ready for unmarshaling the contents of the slice.
func (p *Buffer) SetBuf(s []byte) {
p.buf = s
p.index = 0
}
// Bytes returns the contents of the Buffer.
func (p *Buffer) Bytes() []byte { return p.buf }
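// Illustrative sketch: a Buffer can be reused across messages to cut down on
// allocations. Only the methods defined above are shown; the encoding and
// decoding entry points live elsewhere in this package:
//
//	buf := proto.NewBuffer(nil)
//	// ... encode into buf ...
//	wire := buf.Bytes() // data accumulated so far
//	buf.Reset()         // reuse the same Buffer for the next message
//	buf.SetBuf(wire)    // or point it at existing bytes for decoding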
/*
* Helper routines for simplifying the creation of optional fields of basic type.
*/
// Bool is a helper routine that allocates a new bool value
// to store v and returns a pointer to it.
func Bool(v bool) *bool {
return &v
}
// Int32 is a helper routine that allocates a new int32 value
// to store v and returns a pointer to it.
func Int32(v int32) *int32 {
return &v
}
// Int is a helper routine that allocates a new int32 value
// to store v and returns a pointer to it, but unlike Int32
// its argument value is an int.
func Int(v int) *int32 {
p := new(int32)
*p = int32(v)
return p
}
// Int64 is a helper routine that allocates a new int64 value
// to store v and returns a pointer to it.
func Int64(v int64) *int64 {
return &v
}
// Float32 is a helper routine that allocates a new float32 value
// to store v and returns a pointer to it.
func Float32(v float32) *float32 {
return &v
}
// Float64 is a helper routine that allocates a new float64 value
// to store v and returns a pointer to it.
func Float64(v float64) *float64 {
return &v
}
// Uint32 is a helper routine that allocates a new uint32 value
// to store v and returns a pointer to it.
func Uint32(v uint32) *uint32 {
return &v
}
// Uint64 is a helper routine that allocates a new uint64 value
// to store v and returns a pointer to it.
func Uint64(v uint64) *uint64 {
return &v
}
// String is a helper routine that allocates a new string value
// to store v and returns a pointer to it.
func String(v string) *string {
return &v
}
// EnumName is a helper function to simplify printing protocol buffer enums
// by name. Given an enum map and a value, it returns a useful string.
func EnumName(m map[int32]string, v int32) string {
s, ok := m[v]
if ok {
return s
}
return strconv.Itoa(int(v))
}
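// For example: with the generated maps from the package documentation above,
// EnumName(FOO_name, int32(FOO_X)) returns "X", while an unregistered value
// such as EnumName(FOO_name, 99) falls back to the numeric string "99".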
// UnmarshalJSONEnum is a helper function to simplify recovering enum int values
// from their JSON-encoded representation. Given a map from the enum's symbolic
// names to its int values, and a byte buffer containing the JSON-encoded
// value, it returns an int32 that can be cast to the enum type by the caller.
//
// The function can deal with both JSON representations, numeric and symbolic.
func UnmarshalJSONEnum(m map[string]int32, data []byte, enumName string) (int32, error) {
if data[0] == '"' {
// New style: enums are strings.
var repr string
if err := json.Unmarshal(data, &repr); err != nil {
return -1, err
}
val, ok := m[repr]
if !ok {
return 0, fmt.Errorf("unrecognized enum %s value %q", enumName, repr)
}
return val, nil
}
// Old style: enums are ints.
var val int32
if err := json.Unmarshal(data, &val); err != nil {
return 0, fmt.Errorf("cannot unmarshal %#q into enum %s", data, enumName)
}
return val, nil
}
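// For example: both JSON representations decode to the same value, using the
// FOO_value map from the package documentation above:
//
//	v, _ := proto.UnmarshalJSONEnum(FOO_value, []byte(`"X"`), "FOO") // v == 17
//	v, _ = proto.UnmarshalJSONEnum(FOO_value, []byte("17"), "FOO")   // v == 17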
// DebugPrint dumps the encoded data in b in a debugging format with a header
// including the string s. Used in testing but made available for general debugging.
func (p *Buffer) DebugPrint(s string, b []byte) {
var u uint64
obuf := p.buf
index := p.index
p.buf = b
p.index = 0
depth := 0
fmt.Printf("\n--- %s ---\n", s)
out:
for {
for i := 0; i < depth; i++ {
fmt.Print(" ")
}
index := p.index
if index == len(p.buf) {
break
}
op, err := p.DecodeVarint()
if err != nil {
fmt.Printf("%3d: fetching op err %v\n", index, err)
break out
}
tag := op >> 3
wire := op & 7
switch wire {
default:
fmt.Printf("%3d: t=%3d unknown wire=%d\n",
index, tag, wire)
break out
case WireBytes:
var r []byte
r, err = p.DecodeRawBytes(false)
if err != nil {
break out
}
fmt.Printf("%3d: t=%3d bytes [%d]", index, tag, len(r))
if len(r) <= 6 {
for i := 0; i < len(r); i++ {
fmt.Printf(" %.2x", r[i])
}
} else {
for i := 0; i < 3; i++ {
fmt.Printf(" %.2x", r[i])
}
fmt.Printf(" ..")
for i := len(r) - 3; i < len(r); i++ {
fmt.Printf(" %.2x", r[i])
}
}
fmt.Printf("\n")
case WireFixed32:
u, err = p.DecodeFixed32()
if err != nil {
fmt.Printf("%3d: t=%3d fix32 err %v\n", index, tag, err)
break out
}
fmt.Printf("%3d: t=%3d fix32 %d\n", index, tag, u)
case WireFixed64:
u, err = p.DecodeFixed64()
if err != nil {
fmt.Printf("%3d: t=%3d fix64 err %v\n", index, tag, err)
break out
}
fmt.Printf("%3d: t=%3d fix64 %d\n", index, tag, u)
case WireVarint:
u, err = p.DecodeVarint()
if err != nil {
fmt.Printf("%3d: t=%3d varint err %v\n", index, tag, err)
break out
}
fmt.Printf("%3d: t=%3d varint %d\n", index, tag, u)
case WireStartGroup:
fmt.Printf("%3d: t=%3d start\n", index, tag)
depth++
case WireEndGroup:
depth--
fmt.Printf("%3d: t=%3d end\n", index, tag)
}
}
if depth != 0 {
fmt.Printf("%3d: start-end not balanced %d\n", p.index, depth)
}
fmt.Printf("\n")
p.buf = obuf
p.index = index
}
// SetDefaults sets unset protocol buffer fields to their default values.
// It only modifies fields that are both unset and have defined defaults.
// It recursively sets default values in any non-nil sub-messages.
func SetDefaults(pb Message) {
setDefaults(reflect.ValueOf(pb), true, false)
}
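// Illustrative sketch: using the Test message from the package documentation
// above, whose "type" field declares a default of 77:
//
//	t := &pb.Test{Label: proto.String("hello")}
//	proto.SetDefaults(t)
//	// t.Type now points to 77; fields without a declared default remain nil.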
// v is a pointer to a struct.
func setDefaults(v reflect.Value, recur, zeros bool) {
v = v.Elem()
defaultMu.RLock()
dm, ok := defaults[v.Type()]
defaultMu.RUnlock()
if !ok {
dm = buildDefaultMessage(v.Type())
defaultMu.Lock()
defaults[v.Type()] = dm
defaultMu.Unlock()
}
for _, sf := range dm.scalars {
f := v.Field(sf.index)
if !f.IsNil() {
// field already set
continue
}
dv := sf.value
if dv == nil && !zeros {
// no explicit default, and don't want to set zeros
continue
}
fptr := f.Addr().Interface() // **T
// TODO: Consider batching the allocations we do here.
switch sf.kind {
case reflect.Bool:
b := new(bool)
if dv != nil {
*b = dv.(bool)
}
*(fptr.(**bool)) = b
case reflect.Float32:
f := new(float32)
if dv != nil {
*f = dv.(float32)
}
*(fptr.(**float32)) = f
case reflect.Float64:
f := new(float64)
if dv != nil {
*f = dv.(float64)
}
*(fptr.(**float64)) = f
case reflect.Int32:
// might be an enum
if ft := f.Type(); ft != int32PtrType {
// enum
f.Set(reflect.New(ft.Elem()))
if dv != nil {
f.Elem().SetInt(int64(dv.(int32)))
}
} else {
// int32 field
i := new(int32)
if dv != nil {
*i = dv.(int32)
}
*(fptr.(**int32)) = i
}
case reflect.Int64:
i := new(int64)
if dv != nil {
*i = dv.(int64)
}
*(fptr.(**int64)) = i
case reflect.String:
s := new(string)
if dv != nil {
*s = dv.(string)
}
*(fptr.(**string)) = s
case reflect.Uint8:
// exceptional case: []byte
var b []byte
if dv != nil {
db := dv.([]byte)
b = make([]byte, len(db))
copy(b, db)
} else {
b = []byte{}
}
*(fptr.(*[]byte)) = b
case reflect.Uint32:
u := new(uint32)
if dv != nil {
*u = dv.(uint32)
}
*(fptr.(**uint32)) = u
case reflect.Uint64:
u := new(uint64)
if dv != nil {
*u = dv.(uint64)
}
*(fptr.(**uint64)) = u
default:
log.Printf("proto: can't set default for field %v (sf.kind=%v)", f, sf.kind)
}
}
for _, ni := range dm.nested {
f := v.Field(ni)
// f is *T or []*T or map[T]*T
switch f.Kind() {
case reflect.Ptr:
if f.IsNil() {
continue
}
setDefaults(f, recur, zeros)
case reflect.Slice:
for i := 0; i < f.Len(); i++ {
e := f.Index(i)
if e.IsNil() {
continue
}
setDefaults(e, recur, zeros)
}
case reflect.Map:
for _, k := range f.MapKeys() {
e := f.MapIndex(k)
if e.IsNil() {
continue
}
setDefaults(e, recur, zeros)
}
}
}
}
var (
// defaults maps a protocol buffer struct type to a slice of the fields,
// with its scalar fields set to their proto-declared non-zero default values.
defaultMu sync.RWMutex
defaults = make(map[reflect.Type]defaultMessage)
int32PtrType = reflect.TypeOf((*int32)(nil))
)
// defaultMessage represents information about the default values of a message.
type defaultMessage struct {
scalars []scalarField
nested []int // struct field index of nested messages
}
type scalarField struct {
index int // struct field index
kind reflect.Kind // element type (the T in *T or []T)
value interface{} // the proto-declared default value, or nil
}
// t is a struct type.
func buildDefaultMessage(t reflect.Type) (dm defaultMessage) {
sprop := GetProperties(t)
for _, prop := range sprop.Prop {
fi, ok := sprop.decoderTags.get(prop.Tag)
if !ok {
// XXX_unrecognized
continue
}
ft := t.Field(fi).Type
sf, nested, err := fieldDefault(ft, prop)
switch {
case err != nil:
log.Print(err)
case nested:
dm.nested = append(dm.nested, fi)
case sf != nil:
sf.index = fi
dm.scalars = append(dm.scalars, *sf)
}
}
return dm
}
// fieldDefault returns the scalarField for field type ft.
// sf will be nil if the field can not have a default.
// nestedMessage will be true if this is a nested message.
// Note that sf.index is not set on return.
func fieldDefault(ft reflect.Type, prop *Properties) (sf *scalarField, nestedMessage bool, err error) {
var canHaveDefault bool
switch ft.Kind() {
case reflect.Ptr:
if ft.Elem().Kind() == reflect.Struct {
nestedMessage = true
} else {
canHaveDefault = true // proto2 scalar field
}
case reflect.Slice:
switch ft.Elem().Kind() {
case reflect.Ptr:
nestedMessage = true // repeated message
case reflect.Uint8:
canHaveDefault = true // bytes field
}
case reflect.Map:
if ft.Elem().Kind() == reflect.Ptr {
nestedMessage = true // map with message values
}
}
if !canHaveDefault {
if nestedMessage {
return nil, true, nil
}
return nil, false, nil
}
// We now know that ft is a pointer or slice.
sf = &scalarField{kind: ft.Elem().Kind()}
// scalar fields without defaults
if !prop.HasDefault {
return sf, false, nil
}
// a scalar field: either *T or []byte
switch ft.Elem().Kind() {
case reflect.Bool:
x, err := strconv.ParseBool(prop.Default)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default bool %q: %v", prop.Default, err)
}
sf.value = x
case reflect.Float32:
x, err := strconv.ParseFloat(prop.Default, 32)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default float32 %q: %v", prop.Default, err)
}
sf.value = float32(x)
case reflect.Float64:
x, err := strconv.ParseFloat(prop.Default, 64)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default float64 %q: %v", prop.Default, err)
}
sf.value = x
case reflect.Int32:
x, err := strconv.ParseInt(prop.Default, 10, 32)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default int32 %q: %v", prop.Default, err)
}
sf.value = int32(x)
case reflect.Int64:
x, err := strconv.ParseInt(prop.Default, 10, 64)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default int64 %q: %v", prop.Default, err)
}
sf.value = x
case reflect.String:
sf.value = prop.Default
case reflect.Uint8:
// []byte (not *uint8)
sf.value = []byte(prop.Default)
case reflect.Uint32:
x, err := strconv.ParseUint(prop.Default, 10, 32)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default uint32 %q: %v", prop.Default, err)
}
sf.value = uint32(x)
case reflect.Uint64:
x, err := strconv.ParseUint(prop.Default, 10, 64)
if err != nil {
return nil, false, fmt.Errorf("proto: bad default uint64 %q: %v", prop.Default, err)
}
sf.value = x
default:
return nil, false, fmt.Errorf("proto: unhandled def kind %v", ft.Elem().Kind())
}
return sf, false, nil
}
// Map fields may have key types of non-float scalars, strings and enums.
// The easiest way to sort them in some deterministic order is to use fmt.
// If this turns out to be inefficient we can always consider other options,
// such as doing a Schwartzian transform.
func mapKeys(vs []reflect.Value) sort.Interface {
s := mapKeySorter{
vs: vs,
// default Less function: textual comparison
less: func(a, b reflect.Value) bool {
return fmt.Sprint(a.Interface()) < fmt.Sprint(b.Interface())
},
}
// Type specialization per https://developers.google.com/protocol-buffers/docs/proto#maps;
// numeric keys are sorted numerically.
if len(vs) == 0 {
return s
}
switch vs[0].Kind() {
case reflect.Int32, reflect.Int64:
s.less = func(a, b reflect.Value) bool { return a.Int() < b.Int() }
case reflect.Uint32, reflect.Uint64:
s.less = func(a, b reflect.Value) bool { return a.Uint() < b.Uint() }
}
return s
}
type mapKeySorter struct {
vs []reflect.Value
less func(a, b reflect.Value) bool
}
func (s mapKeySorter) Len() int { return len(s.vs) }
func (s mapKeySorter) Swap(i, j int) { s.vs[i], s.vs[j] = s.vs[j], s.vs[i] }
func (s mapKeySorter) Less(i, j int) bool {
return s.less(s.vs[i], s.vs[j])
}
// isProto3Zero reports whether v is a zero proto3 value.
func isProto3Zero(v reflect.Value) bool {
switch v.Kind() {
case reflect.Bool:
return !v.Bool()
case reflect.Int32, reflect.Int64:
return v.Int() == 0
case reflect.Uint32, reflect.Uint64:
return v.Uint() == 0
case reflect.Float32, reflect.Float64:
return v.Float() == 0
case reflect.String:
return v.String() == ""
}
return false
}
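// For example: isProto3Zero reports true for false, 0, 0.0 and "", i.e. the
// values that proto3 scalar fields hold when they are left unset.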
// ProtoPackageIsVersion2 is referenced from generated protocol buffer files
// to assert that that code is compatible with this version of the proto package.
const ProtoPackageIsVersion2 = true
// ProtoPackageIsVersion1 is referenced from generated protocol buffer files
// to assert that that code is compatible with this version of the proto package.
const ProtoPackageIsVersion1 = true

46
vendor/github.com/golang/protobuf/proto/map_test.go generated vendored Normal file
View File

@ -0,0 +1,46 @@
package proto_test
import (
"fmt"
"testing"
"github.com/golang/protobuf/proto"
ppb "github.com/golang/protobuf/proto/proto3_proto"
)
func marshalled() []byte {
m := &ppb.IntMaps{}
for i := 0; i < 1000; i++ {
m.Maps = append(m.Maps, &ppb.IntMap{
Rtt: map[int32]int32{1: 2},
})
}
b, err := proto.Marshal(m)
if err != nil {
panic(fmt.Sprintf("Can't marshal %+v: %v", m, err))
}
return b
}
func BenchmarkConcurrentMapUnmarshal(b *testing.B) {
in := marshalled()
b.RunParallel(func(pb *testing.PB) {
for pb.Next() {
var out ppb.IntMaps
if err := proto.Unmarshal(in, &out); err != nil {
b.Errorf("Can't unmarshal ppb.IntMaps: %v", err)
}
}
})
}
func BenchmarkSequentialMapUnmarshal(b *testing.B) {
in := marshalled()
b.ResetTimer()
for i := 0; i < b.N; i++ {
var out ppb.IntMaps
if err := proto.Unmarshal(in, &out); err != nil {
b.Errorf("Can't unmarshal ppb.IntMaps: %v", err)
}
}
}

311
vendor/github.com/golang/protobuf/proto/message_set.go generated vendored Normal file

@ -0,0 +1,311 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
/*
* Support for message sets.
*/
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"reflect"
"sort"
)
// errNoMessageTypeID occurs when a protocol buffer does not have a message type ID.
// A message type ID is required for storing a protocol buffer in a message set.
var errNoMessageTypeID = errors.New("proto does not have a message type ID")
// The first two types (_MessageSet_Item and messageSet)
// model what the protocol compiler produces for the following protocol message:
// message MessageSet {
// repeated group Item = 1 {
// required int32 type_id = 2;
// required string message = 3;
// };
// }
// That is the MessageSet wire format. We can't use a proto to generate these
// because that would introduce a circular dependency between it and this package.
type _MessageSet_Item struct {
TypeId *int32 `protobuf:"varint,2,req,name=type_id"`
Message []byte `protobuf:"bytes,3,req,name=message"`
}
type messageSet struct {
Item []*_MessageSet_Item `protobuf:"group,1,rep"`
XXX_unrecognized []byte
// TODO: caching?
}
// Make sure messageSet is a Message.
var _ Message = (*messageSet)(nil)
// messageTypeIder is an interface satisfied by a protocol buffer type
// that may be stored in a MessageSet.
type messageTypeIder interface {
MessageTypeId() int32
}
func (ms *messageSet) find(pb Message) *_MessageSet_Item {
mti, ok := pb.(messageTypeIder)
if !ok {
return nil
}
id := mti.MessageTypeId()
for _, item := range ms.Item {
if *item.TypeId == id {
return item
}
}
return nil
}
func (ms *messageSet) Has(pb Message) bool {
if ms.find(pb) != nil {
return true
}
return false
}
func (ms *messageSet) Unmarshal(pb Message) error {
if item := ms.find(pb); item != nil {
return Unmarshal(item.Message, pb)
}
if _, ok := pb.(messageTypeIder); !ok {
return errNoMessageTypeID
}
return nil // TODO: return error instead?
}
func (ms *messageSet) Marshal(pb Message) error {
msg, err := Marshal(pb)
if err != nil {
return err
}
if item := ms.find(pb); item != nil {
// reuse existing item
item.Message = msg
return nil
}
mti, ok := pb.(messageTypeIder)
if !ok {
return errNoMessageTypeID
}
mtid := mti.MessageTypeId()
ms.Item = append(ms.Item, &_MessageSet_Item{
TypeId: &mtid,
Message: msg,
})
return nil
}
func (ms *messageSet) Reset() { *ms = messageSet{} }
func (ms *messageSet) String() string { return CompactTextString(ms) }
func (*messageSet) ProtoMessage() {}
// Support for the message_set_wire_format message option.
func skipVarint(buf []byte) []byte {
i := 0
for ; buf[i]&0x80 != 0; i++ {
}
return buf[i+1:]
}
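// For example: skipVarint([]byte{0x96, 0x01, 0xAA}) returns []byte{0xAA}; the
// 0x96 byte has its continuation bit set, 0x01 ends the two-byte varint
// (decimal 150), and the remainder of the slice follows.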
// MarshalMessageSet encodes the extension map represented by m in the message set wire format.
// It is called by generated Marshal methods on protocol buffer messages with the message_set_wire_format option.
func MarshalMessageSet(exts interface{}) ([]byte, error) {
var m map[int32]Extension
switch exts := exts.(type) {
case *XXX_InternalExtensions:
if err := encodeExtensions(exts); err != nil {
return nil, err
}
m, _ = exts.extensionsRead()
case map[int32]Extension:
if err := encodeExtensionsMap(exts); err != nil {
return nil, err
}
m = exts
default:
return nil, errors.New("proto: not an extension map")
}
// Sort extension IDs to provide a deterministic encoding.
// See also enc_map in encode.go.
ids := make([]int, 0, len(m))
for id := range m {
ids = append(ids, int(id))
}
sort.Ints(ids)
ms := &messageSet{Item: make([]*_MessageSet_Item, 0, len(m))}
for _, id := range ids {
e := m[int32(id)]
// Remove the wire type and field number varint, as well as the length varint.
msg := skipVarint(skipVarint(e.enc))
ms.Item = append(ms.Item, &_MessageSet_Item{
TypeId: Int32(int32(id)),
Message: msg,
})
}
return Marshal(ms)
}
// UnmarshalMessageSet decodes the extension map encoded in buf in the message set wire format.
// It is called by generated Unmarshal methods on protocol buffer messages with the message_set_wire_format option.
func UnmarshalMessageSet(buf []byte, exts interface{}) error {
var m map[int32]Extension
switch exts := exts.(type) {
case *XXX_InternalExtensions:
m = exts.extensionsWrite()
case map[int32]Extension:
m = exts
default:
return errors.New("proto: not an extension map")
}
ms := new(messageSet)
if err := Unmarshal(buf, ms); err != nil {
return err
}
for _, item := range ms.Item {
id := *item.TypeId
msg := item.Message
// Restore wire type and field number varint, plus length varint.
// Be careful to preserve duplicate items.
b := EncodeVarint(uint64(id)<<3 | WireBytes)
if ext, ok := m[id]; ok {
// Existing data; rip off the tag and length varint
// so we join the new data correctly.
// We can assume that ext.enc is set because we are unmarshaling.
o := ext.enc[len(b):] // skip wire type and field number
_, n := DecodeVarint(o) // calculate length of length varint
o = o[n:] // skip length varint
msg = append(o, msg...) // join old data and new data
}
b = append(b, EncodeVarint(uint64(len(msg)))...)
b = append(b, msg...)
m[id] = Extension{enc: b}
}
return nil
}
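// Note that duplicate items carrying the same type_id are concatenated rather
// than overwritten, which is how repeated occurrences of a non-repeated
// extension get merged on the wire; TestUnmarshalMessageSetWithDuplicate in
// message_set_test.go exercises exactly this case.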
// MarshalMessageSetJSON encodes the extension map represented by m in JSON format.
// It is called by generated MarshalJSON methods on protocol buffer messages with the message_set_wire_format option.
func MarshalMessageSetJSON(exts interface{}) ([]byte, error) {
var m map[int32]Extension
switch exts := exts.(type) {
case *XXX_InternalExtensions:
m, _ = exts.extensionsRead()
case map[int32]Extension:
m = exts
default:
return nil, errors.New("proto: not an extension map")
}
var b bytes.Buffer
b.WriteByte('{')
// Process the map in key order for deterministic output.
ids := make([]int32, 0, len(m))
for id := range m {
ids = append(ids, id)
}
sort.Sort(int32Slice(ids)) // int32Slice defined in text.go
for i, id := range ids {
ext := m[id]
if i > 0 {
b.WriteByte(',')
}
msd, ok := messageSetMap[id]
if !ok {
// Unknown type; we can't render it, so skip it.
continue
}
fmt.Fprintf(&b, `"[%s]":`, msd.name)
x := ext.value
if x == nil {
x = reflect.New(msd.t.Elem()).Interface()
if err := Unmarshal(ext.enc, x.(Message)); err != nil {
return nil, err
}
}
d, err := json.Marshal(x)
if err != nil {
return nil, err
}
b.Write(d)
}
b.WriteByte('}')
return b.Bytes(), nil
}
// UnmarshalMessageSetJSON decodes the extension map encoded in buf in JSON format.
// It is called by generated UnmarshalJSON methods on protocol buffer messages with the message_set_wire_format option.
func UnmarshalMessageSetJSON(buf []byte, exts interface{}) error {
// Common-case fast path.
if len(buf) == 0 || bytes.Equal(buf, []byte("{}")) {
return nil
}
// This is fairly tricky, and it's not clear that it is needed.
return errors.New("TODO: UnmarshalMessageSetJSON not yet implemented")
}
// A global registry of types that can be used in a MessageSet.
var messageSetMap = make(map[int32]messageSetDesc)
type messageSetDesc struct {
t reflect.Type // pointer to struct
name string
}
// RegisterMessageSetType is called from the generated code.
func RegisterMessageSetType(m Message, fieldNum int32, name string) {
messageSetMap[fieldNum] = messageSetDesc{
t: reflect.TypeOf(m),
name: name,
}
}
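// Illustrative sketch, not part of the upstream file: generated code for a
// message-set extension registers itself from an init function, roughly
//
//	func init() {
//		RegisterMessageSetType((*MyExtension)(nil), 12345, "my.pkg.MyExtension")
//	}
//
// where the type, field number and name are hypothetical.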

View File

@ -0,0 +1,66 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2014 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
import (
"bytes"
"testing"
)
func TestUnmarshalMessageSetWithDuplicate(t *testing.T) {
// Check that a repeated message set entry will be concatenated.
in := &messageSet{
Item: []*_MessageSet_Item{
{TypeId: Int32(12345), Message: []byte("hoo")},
{TypeId: Int32(12345), Message: []byte("hah")},
},
}
b, err := Marshal(in)
if err != nil {
t.Fatalf("Marshal: %v", err)
}
t.Logf("Marshaled bytes: %q", b)
var extensions XXX_InternalExtensions
if err := UnmarshalMessageSet(b, &extensions); err != nil {
t.Fatalf("UnmarshalMessageSet: %v", err)
}
ext, ok := extensions.p.extensionMap[12345]
if !ok {
t.Fatalf("Didn't retrieve extension 12345; map is %v", extensions.p.extensionMap)
}
// Skip wire type/field number and length varints.
got := skipVarint(skipVarint(ext.enc))
if want := []byte("hoohah"); !bytes.Equal(got, want) {
t.Errorf("Combined extension is %q, want %q", got, want)
}
}

View File

@ -0,0 +1,484 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// +build appengine js
// This file contains an implementation of proto field accesses using package reflect.
// It is slower than the code in pointer_unsafe.go but it avoids package unsafe and can
// be used on App Engine.
package proto
import (
"math"
"reflect"
)
// A structPointer is a pointer to a struct.
type structPointer struct {
v reflect.Value
}
// toStructPointer returns a structPointer equivalent to the given reflect value.
// The reflect value must itself be a pointer to a struct.
func toStructPointer(v reflect.Value) structPointer {
return structPointer{v}
}
// IsNil reports whether p is nil.
func structPointer_IsNil(p structPointer) bool {
return p.v.IsNil()
}
// Interface returns the struct pointer as an interface value.
func structPointer_Interface(p structPointer, _ reflect.Type) interface{} {
return p.v.Interface()
}
// A field identifies a field in a struct, accessible from a structPointer.
// In this implementation, a field is identified by the sequence of field indices
// passed to reflect's FieldByIndex.
type field []int
// toField returns a field equivalent to the given reflect field.
func toField(f *reflect.StructField) field {
return f.Index
}
// invalidField is an invalid field identifier.
var invalidField = field(nil)
// IsValid reports whether the field identifier is valid.
func (f field) IsValid() bool { return f != nil }
// field returns the given field in the struct as a reflect value.
func structPointer_field(p structPointer, f field) reflect.Value {
// Special case: an extension map entry with a value of type T
// passes a *T to the struct-handling code with a zero field,
// expecting that it will be treated as equivalent to *struct{ X T },
// which has the same memory layout. We have to handle that case
// specially, because reflect will panic if we call FieldByIndex on a
// non-struct.
if f == nil {
return p.v.Elem()
}
return p.v.Elem().FieldByIndex(f)
}
// ifield returns the given field in the struct as an interface value.
func structPointer_ifield(p structPointer, f field) interface{} {
return structPointer_field(p, f).Addr().Interface()
}
// Bytes returns the address of a []byte field in the struct.
func structPointer_Bytes(p structPointer, f field) *[]byte {
return structPointer_ifield(p, f).(*[]byte)
}
// BytesSlice returns the address of a [][]byte field in the struct.
func structPointer_BytesSlice(p structPointer, f field) *[][]byte {
return structPointer_ifield(p, f).(*[][]byte)
}
// Bool returns the address of a *bool field in the struct.
func structPointer_Bool(p structPointer, f field) **bool {
return structPointer_ifield(p, f).(**bool)
}
// BoolVal returns the address of a bool field in the struct.
func structPointer_BoolVal(p structPointer, f field) *bool {
return structPointer_ifield(p, f).(*bool)
}
// BoolSlice returns the address of a []bool field in the struct.
func structPointer_BoolSlice(p structPointer, f field) *[]bool {
return structPointer_ifield(p, f).(*[]bool)
}
// String returns the address of a *string field in the struct.
func structPointer_String(p structPointer, f field) **string {
return structPointer_ifield(p, f).(**string)
}
// StringVal returns the address of a string field in the struct.
func structPointer_StringVal(p structPointer, f field) *string {
return structPointer_ifield(p, f).(*string)
}
// StringSlice returns the address of a []string field in the struct.
func structPointer_StringSlice(p structPointer, f field) *[]string {
return structPointer_ifield(p, f).(*[]string)
}
// Extensions returns the address of an extension map field in the struct.
func structPointer_Extensions(p structPointer, f field) *XXX_InternalExtensions {
return structPointer_ifield(p, f).(*XXX_InternalExtensions)
}
// ExtMap returns the address of an extension map field in the struct.
func structPointer_ExtMap(p structPointer, f field) *map[int32]Extension {
return structPointer_ifield(p, f).(*map[int32]Extension)
}
// NewAt returns the reflect.Value for a pointer to a field in the struct.
func structPointer_NewAt(p structPointer, f field, typ reflect.Type) reflect.Value {
return structPointer_field(p, f).Addr()
}
// SetStructPointer writes a *struct field in the struct.
func structPointer_SetStructPointer(p structPointer, f field, q structPointer) {
structPointer_field(p, f).Set(q.v)
}
// GetStructPointer reads a *struct field in the struct.
func structPointer_GetStructPointer(p structPointer, f field) structPointer {
return structPointer{structPointer_field(p, f)}
}
// StructPointerSlice returns the address of a []*struct field in the struct.
func structPointer_StructPointerSlice(p structPointer, f field) structPointerSlice {
return structPointerSlice{structPointer_field(p, f)}
}
// A structPointerSlice represents the address of a slice of pointers to structs
// (themselves messages or groups). That is, v.Type() is *[]*struct{...}.
type structPointerSlice struct {
v reflect.Value
}
func (p structPointerSlice) Len() int { return p.v.Len() }
func (p structPointerSlice) Index(i int) structPointer { return structPointer{p.v.Index(i)} }
func (p structPointerSlice) Append(q structPointer) {
p.v.Set(reflect.Append(p.v, q.v))
}
var (
int32Type = reflect.TypeOf(int32(0))
uint32Type = reflect.TypeOf(uint32(0))
float32Type = reflect.TypeOf(float32(0))
int64Type = reflect.TypeOf(int64(0))
uint64Type = reflect.TypeOf(uint64(0))
float64Type = reflect.TypeOf(float64(0))
)
// A word32 represents a field of type *int32, *uint32, *float32, or *enum.
// That is, v.Type() is *int32, *uint32, *float32, or *enum and v is assignable.
type word32 struct {
v reflect.Value
}
// IsNil reports whether p is nil.
func word32_IsNil(p word32) bool {
return p.v.IsNil()
}
// Set sets p to point at a newly allocated word with bits set to x.
func word32_Set(p word32, o *Buffer, x uint32) {
t := p.v.Type().Elem()
switch t {
case int32Type:
if len(o.int32s) == 0 {
o.int32s = make([]int32, uint32PoolSize)
}
o.int32s[0] = int32(x)
p.v.Set(reflect.ValueOf(&o.int32s[0]))
o.int32s = o.int32s[1:]
return
case uint32Type:
if len(o.uint32s) == 0 {
o.uint32s = make([]uint32, uint32PoolSize)
}
o.uint32s[0] = x
p.v.Set(reflect.ValueOf(&o.uint32s[0]))
o.uint32s = o.uint32s[1:]
return
case float32Type:
if len(o.float32s) == 0 {
o.float32s = make([]float32, uint32PoolSize)
}
o.float32s[0] = math.Float32frombits(x)
p.v.Set(reflect.ValueOf(&o.float32s[0]))
o.float32s = o.float32s[1:]
return
}
// must be enum
p.v.Set(reflect.New(t))
p.v.Elem().SetInt(int64(int32(x)))
}
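// Editorial note, not part of the upstream file: the pooling above means the
// Buffer hands out the address of element 0 of a slab (o.int32s, o.uint32s,
// o.float32s) and then re-slices past it, so decoding many optional scalar
// fields reuses one allocation per slab instead of allocating each pointer
// individually.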
// Get gets the bits pointed at by p, as a uint32.
func word32_Get(p word32) uint32 {
elem := p.v.Elem()
switch elem.Kind() {
case reflect.Int32:
return uint32(elem.Int())
case reflect.Uint32:
return uint32(elem.Uint())
case reflect.Float32:
return math.Float32bits(float32(elem.Float()))
}
panic("unreachable")
}
// Word32 returns a reference to a *int32, *uint32, *float32, or *enum field in the struct.
func structPointer_Word32(p structPointer, f field) word32 {
return word32{structPointer_field(p, f)}
}
// A word32Val represents a field of type int32, uint32, float32, or enum.
// That is, v.Type() is int32, uint32, float32, or enum and v is assignable.
type word32Val struct {
v reflect.Value
}
// Set sets *p to x.
func word32Val_Set(p word32Val, x uint32) {
switch p.v.Type() {
case int32Type:
p.v.SetInt(int64(x))
return
case uint32Type:
p.v.SetUint(uint64(x))
return
case float32Type:
p.v.SetFloat(float64(math.Float32frombits(x)))
return
}
// must be enum
p.v.SetInt(int64(int32(x)))
}
// Get gets the bits pointed at by p, as a uint32.
func word32Val_Get(p word32Val) uint32 {
elem := p.v
switch elem.Kind() {
case reflect.Int32:
return uint32(elem.Int())
case reflect.Uint32:
return uint32(elem.Uint())
case reflect.Float32:
return math.Float32bits(float32(elem.Float()))
}
panic("unreachable")
}
// Word32Val returns a reference to a int32, uint32, float32, or enum field in the struct.
func structPointer_Word32Val(p structPointer, f field) word32Val {
return word32Val{structPointer_field(p, f)}
}
// A word32Slice is a slice of 32-bit values.
// That is, v.Type() is []int32, []uint32, []float32, or []enum.
type word32Slice struct {
v reflect.Value
}
func (p word32Slice) Append(x uint32) {
n, m := p.v.Len(), p.v.Cap()
if n < m {
p.v.SetLen(n + 1)
} else {
t := p.v.Type().Elem()
p.v.Set(reflect.Append(p.v, reflect.Zero(t)))
}
elem := p.v.Index(n)
switch elem.Kind() {
case reflect.Int32:
elem.SetInt(int64(int32(x)))
case reflect.Uint32:
elem.SetUint(uint64(x))
case reflect.Float32:
elem.SetFloat(float64(math.Float32frombits(x)))
}
}
func (p word32Slice) Len() int {
return p.v.Len()
}
func (p word32Slice) Index(i int) uint32 {
elem := p.v.Index(i)
switch elem.Kind() {
case reflect.Int32:
return uint32(elem.Int())
case reflect.Uint32:
return uint32(elem.Uint())
case reflect.Float32:
return math.Float32bits(float32(elem.Float()))
}
panic("unreachable")
}
// Word32Slice returns a reference to a []int32, []uint32, []float32, or []enum field in the struct.
func structPointer_Word32Slice(p structPointer, f field) word32Slice {
return word32Slice{structPointer_field(p, f)}
}
// word64 is like word32 but for 64-bit values.
type word64 struct {
v reflect.Value
}
func word64_Set(p word64, o *Buffer, x uint64) {
t := p.v.Type().Elem()
switch t {
case int64Type:
if len(o.int64s) == 0 {
o.int64s = make([]int64, uint64PoolSize)
}
o.int64s[0] = int64(x)
p.v.Set(reflect.ValueOf(&o.int64s[0]))
o.int64s = o.int64s[1:]
return
case uint64Type:
if len(o.uint64s) == 0 {
o.uint64s = make([]uint64, uint64PoolSize)
}
o.uint64s[0] = x
p.v.Set(reflect.ValueOf(&o.uint64s[0]))
o.uint64s = o.uint64s[1:]
return
case float64Type:
if len(o.float64s) == 0 {
o.float64s = make([]float64, uint64PoolSize)
}
o.float64s[0] = math.Float64frombits(x)
p.v.Set(reflect.ValueOf(&o.float64s[0]))
o.float64s = o.float64s[1:]
return
}
panic("unreachable")
}
func word64_IsNil(p word64) bool {
return p.v.IsNil()
}
func word64_Get(p word64) uint64 {
elem := p.v.Elem()
switch elem.Kind() {
case reflect.Int64:
return uint64(elem.Int())
case reflect.Uint64:
return elem.Uint()
case reflect.Float64:
return math.Float64bits(elem.Float())
}
panic("unreachable")
}
func structPointer_Word64(p structPointer, f field) word64 {
return word64{structPointer_field(p, f)}
}
// word64Val is like word32Val but for 64-bit values.
type word64Val struct {
v reflect.Value
}
func word64Val_Set(p word64Val, o *Buffer, x uint64) {
switch p.v.Type() {
case int64Type:
p.v.SetInt(int64(x))
return
case uint64Type:
p.v.SetUint(x)
return
case float64Type:
p.v.SetFloat(math.Float64frombits(x))
return
}
panic("unreachable")
}
func word64Val_Get(p word64Val) uint64 {
elem := p.v
switch elem.Kind() {
case reflect.Int64:
return uint64(elem.Int())
case reflect.Uint64:
return elem.Uint()
case reflect.Float64:
return math.Float64bits(elem.Float())
}
panic("unreachable")
}
func structPointer_Word64Val(p structPointer, f field) word64Val {
return word64Val{structPointer_field(p, f)}
}
type word64Slice struct {
v reflect.Value
}
func (p word64Slice) Append(x uint64) {
n, m := p.v.Len(), p.v.Cap()
if n < m {
p.v.SetLen(n + 1)
} else {
t := p.v.Type().Elem()
p.v.Set(reflect.Append(p.v, reflect.Zero(t)))
}
elem := p.v.Index(n)
switch elem.Kind() {
case reflect.Int64:
elem.SetInt(int64(x))
case reflect.Uint64:
elem.SetUint(x)
case reflect.Float64:
elem.SetFloat(math.Float64frombits(x))
}
}
func (p word64Slice) Len() int {
return p.v.Len()
}
func (p word64Slice) Index(i int) uint64 {
elem := p.v.Index(i)
switch elem.Kind() {
case reflect.Int64:
return uint64(elem.Int())
case reflect.Uint64:
return uint64(elem.Uint())
case reflect.Float64:
return math.Float64bits(elem.Float())
}
panic("unreachable")
}
func structPointer_Word64Slice(p structPointer, f field) word64Slice {
return word64Slice{structPointer_field(p, f)}
}

View File

@ -0,0 +1,270 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// +build !appengine,!js
// This file contains the implementation of the proto field accesses using package unsafe.
package proto
import (
"reflect"
"unsafe"
)
// NOTE: These type_Foo functions would more idiomatically be methods,
// but Go does not allow methods on pointer types, and we must preserve
// some pointer type for the garbage collector. We use these
// funcs with clunky names as our poor approximation to methods.
//
// An alternative would be
// type structPointer struct { p unsafe.Pointer }
// but that does not registerize as well.
// A structPointer is a pointer to a struct.
type structPointer unsafe.Pointer
// toStructPointer returns a structPointer equivalent to the given reflect value.
func toStructPointer(v reflect.Value) structPointer {
return structPointer(unsafe.Pointer(v.Pointer()))
}
// IsNil reports whether p is nil.
func structPointer_IsNil(p structPointer) bool {
return p == nil
}
// Interface returns the struct pointer, assumed to have element type t,
// as an interface value.
func structPointer_Interface(p structPointer, t reflect.Type) interface{} {
return reflect.NewAt(t, unsafe.Pointer(p)).Interface()
}
// A field identifies a field in a struct, accessible from a structPointer.
// In this implementation, a field is identified by its byte offset from the start of the struct.
type field uintptr
// toField returns a field equivalent to the given reflect field.
func toField(f *reflect.StructField) field {
return field(f.Offset)
}
// invalidField is an invalid field identifier.
const invalidField = ^field(0)
// IsValid reports whether the field identifier is valid.
func (f field) IsValid() bool {
return f != ^field(0)
}
// Bytes returns the address of a []byte field in the struct.
func structPointer_Bytes(p structPointer, f field) *[]byte {
return (*[]byte)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// BytesSlice returns the address of a [][]byte field in the struct.
func structPointer_BytesSlice(p structPointer, f field) *[][]byte {
return (*[][]byte)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// Bool returns the address of a *bool field in the struct.
func structPointer_Bool(p structPointer, f field) **bool {
return (**bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// BoolVal returns the address of a bool field in the struct.
func structPointer_BoolVal(p structPointer, f field) *bool {
return (*bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// BoolSlice returns the address of a []bool field in the struct.
func structPointer_BoolSlice(p structPointer, f field) *[]bool {
return (*[]bool)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// String returns the address of a *string field in the struct.
func structPointer_String(p structPointer, f field) **string {
return (**string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StringVal returns the address of a string field in the struct.
func structPointer_StringVal(p structPointer, f field) *string {
return (*string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StringSlice returns the address of a []string field in the struct.
func structPointer_StringSlice(p structPointer, f field) *[]string {
return (*[]string)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
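// Editorial note, not part of the upstream file: every accessor in this file
// is the same computation, the struct's base address plus the field's byte
// offset, reinterpreted at the field's Go type; a hypothetical int32 accessor
// would read
//
//	(*int32)(unsafe.Pointer(uintptr(p) + uintptr(f)))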
// Extensions returns the address of the XXX_InternalExtensions field in the struct.
func structPointer_Extensions(p structPointer, f field) *XXX_InternalExtensions {
return (*XXX_InternalExtensions)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// ExtMap returns the address of an extension map field in the struct.
func structPointer_ExtMap(p structPointer, f field) *map[int32]Extension {
return (*map[int32]Extension)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// NewAt returns the reflect.Value for a pointer to a field in the struct.
func structPointer_NewAt(p structPointer, f field, typ reflect.Type) reflect.Value {
return reflect.NewAt(typ, unsafe.Pointer(uintptr(p)+uintptr(f)))
}
// SetStructPointer writes a *struct field in the struct.
func structPointer_SetStructPointer(p structPointer, f field, q structPointer) {
*(*structPointer)(unsafe.Pointer(uintptr(p) + uintptr(f))) = q
}
// GetStructPointer reads a *struct field in the struct.
func structPointer_GetStructPointer(p structPointer, f field) structPointer {
return *(*structPointer)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// StructPointerSlice returns the address of a []*struct field in the struct.
func structPointer_StructPointerSlice(p structPointer, f field) *structPointerSlice {
return (*structPointerSlice)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// A structPointerSlice represents a slice of pointers to structs (themselves submessages or groups).
type structPointerSlice []structPointer
func (v *structPointerSlice) Len() int { return len(*v) }
func (v *structPointerSlice) Index(i int) structPointer { return (*v)[i] }
func (v *structPointerSlice) Append(p structPointer) { *v = append(*v, p) }
// A word32 is the address of a "pointer to 32-bit value" field.
type word32 **uint32
// IsNil reports whether *p is nil.
func word32_IsNil(p word32) bool {
return *p == nil
}
// Set sets *p to point at a newly allocated word set to x.
func word32_Set(p word32, o *Buffer, x uint32) {
if len(o.uint32s) == 0 {
o.uint32s = make([]uint32, uint32PoolSize)
}
o.uint32s[0] = x
*p = &o.uint32s[0]
o.uint32s = o.uint32s[1:]
}
// Get gets the value pointed at by *p.
func word32_Get(p word32) uint32 {
return **p
}
// Word32 returns the address of a *int32, *uint32, *float32, or *enum field in the struct.
func structPointer_Word32(p structPointer, f field) word32 {
return word32((**uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// A word32Val is the address of a 32-bit value field.
type word32Val *uint32
// Set sets *p to x.
func word32Val_Set(p word32Val, x uint32) {
*p = x
}
// Get gets the value pointed at by p.
func word32Val_Get(p word32Val) uint32 {
return *p
}
// Word32Val returns the address of an int32, uint32, float32, or enum field in the struct.
func structPointer_Word32Val(p structPointer, f field) word32Val {
return word32Val((*uint32)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// A word32Slice is a slice of 32-bit values.
type word32Slice []uint32
func (v *word32Slice) Append(x uint32) { *v = append(*v, x) }
func (v *word32Slice) Len() int { return len(*v) }
func (v *word32Slice) Index(i int) uint32 { return (*v)[i] }
// Word32Slice returns the address of a []int32, []uint32, []float32, or []enum field in the struct.
func structPointer_Word32Slice(p structPointer, f field) *word32Slice {
return (*word32Slice)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}
// word64 is like word32 but for 64-bit values.
type word64 **uint64
func word64_Set(p word64, o *Buffer, x uint64) {
if len(o.uint64s) == 0 {
o.uint64s = make([]uint64, uint64PoolSize)
}
o.uint64s[0] = x
*p = &o.uint64s[0]
o.uint64s = o.uint64s[1:]
}
func word64_IsNil(p word64) bool {
return *p == nil
}
func word64_Get(p word64) uint64 {
return **p
}
func structPointer_Word64(p structPointer, f field) word64 {
return word64((**uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// word64Val is like word32Val but for 64-bit values.
type word64Val *uint64
func word64Val_Set(p word64Val, o *Buffer, x uint64) {
*p = x
}
func word64Val_Get(p word64Val) uint64 {
return *p
}
func structPointer_Word64Val(p structPointer, f field) word64Val {
return word64Val((*uint64)(unsafe.Pointer(uintptr(p) + uintptr(f))))
}
// word64Slice is like word32Slice but for 64-bit values.
type word64Slice []uint64
func (v *word64Slice) Append(x uint64) { *v = append(*v, x) }
func (v *word64Slice) Len() int { return len(*v) }
func (v *word64Slice) Index(i int) uint64 { return (*v)[i] }
func structPointer_Word64Slice(p structPointer, f field) *word64Slice {
return (*word64Slice)(unsafe.Pointer(uintptr(p) + uintptr(f)))
}

872
vendor/github.com/golang/protobuf/proto/properties.go generated vendored Normal file
View File

@ -0,0 +1,872 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
/*
* Routines for encoding data into the wire format for protocol buffers.
*/
import (
"fmt"
"log"
"os"
"reflect"
"sort"
"strconv"
"strings"
"sync"
)
const debug bool = false
// Constants that identify the encoding of a value on the wire.
const (
WireVarint = 0
WireFixed64 = 1
WireBytes = 2
WireStartGroup = 3
WireEndGroup = 4
WireFixed32 = 5
)
const startSize = 10 // initial slice/string sizes
// Encoders are defined in encode.go
// An encoder outputs the full representation of a field, including its
// tag and encoder type.
type encoder func(p *Buffer, prop *Properties, base structPointer) error
// A valueEncoder encodes a single integer in a particular encoding.
type valueEncoder func(o *Buffer, x uint64) error
// Sizers are defined in encode.go
// A sizer returns the encoded size of a field, including its tag and encoder
// type.
type sizer func(prop *Properties, base structPointer) int
// A valueSizer returns the encoded size of a single integer in a particular
// encoding.
type valueSizer func(x uint64) int
// Decoders are defined in decode.go
// A decoder creates a value from its wire representation.
// Unrecognized subelements are saved in unrec.
type decoder func(p *Buffer, prop *Properties, base structPointer) error
// A valueDecoder decodes a single integer in a particular encoding.
type valueDecoder func(o *Buffer) (x uint64, err error)
// A oneofMarshaler does the marshaling for all oneof fields in a message.
type oneofMarshaler func(Message, *Buffer) error
// A oneofUnmarshaler does the unmarshaling for a oneof field in a message.
type oneofUnmarshaler func(Message, int, int, *Buffer) (bool, error)
// A oneofSizer does the sizing for all oneof fields in a message.
type oneofSizer func(Message) int
// tagMap is an optimization over map[int]int for typical protocol buffer
// use-cases. Encoded protocol buffers are often in tag order with small tag
// numbers.
type tagMap struct {
fastTags []int
slowTags map[int]int
}
// tagMapFastLimit is the upper bound on the tag number that will be stored in
// the tagMap slice rather than its map.
const tagMapFastLimit = 1024
func (p *tagMap) get(t int) (int, bool) {
if t > 0 && t < tagMapFastLimit {
if t >= len(p.fastTags) {
return 0, false
}
fi := p.fastTags[t]
return fi, fi >= 0
}
fi, ok := p.slowTags[t]
return fi, ok
}
func (p *tagMap) put(t int, fi int) {
if t > 0 && t < tagMapFastLimit {
for len(p.fastTags) < t+1 {
p.fastTags = append(p.fastTags, -1)
}
p.fastTags[t] = fi
return
}
if p.slowTags == nil {
p.slowTags = make(map[int]int)
}
p.slowTags[t] = fi
}
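// Illustrative sketch, not part of the upstream file: tags below
// tagMapFastLimit are served from the fastTags slice, larger tags from the
// slowTags map. The function name is hypothetical.
func exampleTagMapLookup() (int, bool) {
var tm tagMap
tm.put(3, 7)      // small tag: stored at fastTags[3]
tm.put(200000, 9) // >= tagMapFastLimit: stored in slowTags
return tm.get(3)  // (7, true), via the slice fast path
}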
// StructProperties represents properties for all the fields of a struct.
// decoderTags and decoderOrigNames should only be used by the decoder.
type StructProperties struct {
Prop []*Properties // properties for each field
reqCount int // required count
decoderTags tagMap // map from proto tag to struct field number
decoderOrigNames map[string]int // map from original name to struct field number
order []int // list of struct field numbers in tag order
unrecField field // field id of the XXX_unrecognized []byte field
extendable bool // is this an extendable proto
oneofMarshaler oneofMarshaler
oneofUnmarshaler oneofUnmarshaler
oneofSizer oneofSizer
stype reflect.Type
// OneofTypes contains information about the oneof fields in this message.
// It is keyed by the original name of a field.
OneofTypes map[string]*OneofProperties
}
// OneofProperties represents information about a specific field in a oneof.
type OneofProperties struct {
Type reflect.Type // pointer to generated struct type for this oneof field
Field int // struct field number of the containing oneof in the message
Prop *Properties
}
// Implement the sorting interface so we can sort the fields in tag order, as recommended by the spec.
// See encode.go, (*Buffer).enc_struct.
func (sp *StructProperties) Len() int { return len(sp.order) }
func (sp *StructProperties) Less(i, j int) bool {
return sp.Prop[sp.order[i]].Tag < sp.Prop[sp.order[j]].Tag
}
func (sp *StructProperties) Swap(i, j int) { sp.order[i], sp.order[j] = sp.order[j], sp.order[i] }
// Properties represents the protocol-specific behavior of a single struct field.
type Properties struct {
Name string // name of the field, for error messages
OrigName string // original name before protocol compiler (always set)
JSONName string // name to use for JSON; determined by protoc
Wire string
WireType int
Tag int
Required bool
Optional bool
Repeated bool
Packed bool // relevant for repeated primitives only
Enum string // set for enum types only
proto3 bool // whether this is known to be a proto3 field; set for []byte only
oneof bool // whether this is a oneof field
Default string // default value
HasDefault bool // whether an explicit default was provided
def_uint64 uint64
enc encoder
valEnc valueEncoder // set for bool and numeric types only
field field
tagcode []byte // encoding of EncodeVarint((Tag<<3)|WireType)
tagbuf [8]byte
stype reflect.Type // set for struct types only
sprop *StructProperties // set for struct types only
isMarshaler bool
isUnmarshaler bool
mtype reflect.Type // set for map types only
mkeyprop *Properties // set for map types only
mvalprop *Properties // set for map types only
size sizer
valSize valueSizer // set for bool and numeric types only
dec decoder
valDec valueDecoder // set for bool and numeric types only
// If this is a packable field, this will be the decoder for the packed version of the field.
packedDec decoder
}
// String formats the properties in the protobuf struct field tag style.
func (p *Properties) String() string {
s := p.Wire
s += ","
s += strconv.Itoa(p.Tag)
if p.Required {
s += ",req"
}
if p.Optional {
s += ",opt"
}
if p.Repeated {
s += ",rep"
}
if p.Packed {
s += ",packed"
}
s += ",name=" + p.OrigName
if p.JSONName != p.OrigName {
s += ",json=" + p.JSONName
}
if p.proto3 {
s += ",proto3"
}
if p.oneof {
s += ",oneof"
}
if len(p.Enum) > 0 {
s += ",enum=" + p.Enum
}
if p.HasDefault {
s += ",def=" + p.Default
}
return s
}
// Parse populates p by parsing a string in the protobuf struct field tag style.
func (p *Properties) Parse(s string) {
// "bytes,49,opt,name=foo,def=hello!"
fields := strings.Split(s, ",") // breaks def=, but handled below.
if len(fields) < 2 {
fmt.Fprintf(os.Stderr, "proto: tag has too few fields: %q\n", s)
return
}
p.Wire = fields[0]
switch p.Wire {
case "varint":
p.WireType = WireVarint
p.valEnc = (*Buffer).EncodeVarint
p.valDec = (*Buffer).DecodeVarint
p.valSize = sizeVarint
case "fixed32":
p.WireType = WireFixed32
p.valEnc = (*Buffer).EncodeFixed32
p.valDec = (*Buffer).DecodeFixed32
p.valSize = sizeFixed32
case "fixed64":
p.WireType = WireFixed64
p.valEnc = (*Buffer).EncodeFixed64
p.valDec = (*Buffer).DecodeFixed64
p.valSize = sizeFixed64
case "zigzag32":
p.WireType = WireVarint
p.valEnc = (*Buffer).EncodeZigzag32
p.valDec = (*Buffer).DecodeZigzag32
p.valSize = sizeZigzag32
case "zigzag64":
p.WireType = WireVarint
p.valEnc = (*Buffer).EncodeZigzag64
p.valDec = (*Buffer).DecodeZigzag64
p.valSize = sizeZigzag64
case "bytes", "group":
p.WireType = WireBytes
// no numeric converter for non-numeric types
default:
fmt.Fprintf(os.Stderr, "proto: tag has unknown wire type: %q\n", s)
return
}
var err error
p.Tag, err = strconv.Atoi(fields[1])
if err != nil {
return
}
for i := 2; i < len(fields); i++ {
f := fields[i]
switch {
case f == "req":
p.Required = true
case f == "opt":
p.Optional = true
case f == "rep":
p.Repeated = true
case f == "packed":
p.Packed = true
case strings.HasPrefix(f, "name="):
p.OrigName = f[5:]
case strings.HasPrefix(f, "json="):
p.JSONName = f[5:]
case strings.HasPrefix(f, "enum="):
p.Enum = f[5:]
case f == "proto3":
p.proto3 = true
case f == "oneof":
p.oneof = true
case strings.HasPrefix(f, "def="):
p.HasDefault = true
p.Default = f[4:] // rest of string
if i+1 < len(fields) {
// Commas aren't escaped, and def is always last.
p.Default += "," + strings.Join(fields[i+1:], ",")
break
}
}
}
}
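// Illustrative sketch, not part of the upstream file: parsing a typical
// generated struct tag. The function name is hypothetical.
func exampleParseTag() Properties {
var p Properties
p.Parse("bytes,4,opt,name=data,proto3")
// Now p.Wire == "bytes", p.WireType == WireBytes, p.Tag == 4,
// p.Optional is true, p.OrigName == "data" and the proto3 flag is set.
return p
}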
func logNoSliceEnc(t1, t2 reflect.Type) {
fmt.Fprintf(os.Stderr, "proto: no slice oenc for %T = []%T\n", t1, t2)
}
var protoMessageType = reflect.TypeOf((*Message)(nil)).Elem()
// Initialize the fields for encoding and decoding.
func (p *Properties) setEncAndDec(typ reflect.Type, f *reflect.StructField, lockGetProp bool) {
p.enc = nil
p.dec = nil
p.size = nil
switch t1 := typ; t1.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no coders for %v\n", t1)
// proto3 scalar types
case reflect.Bool:
p.enc = (*Buffer).enc_proto3_bool
p.dec = (*Buffer).dec_proto3_bool
p.size = size_proto3_bool
case reflect.Int32:
p.enc = (*Buffer).enc_proto3_int32
p.dec = (*Buffer).dec_proto3_int32
p.size = size_proto3_int32
case reflect.Uint32:
p.enc = (*Buffer).enc_proto3_uint32
p.dec = (*Buffer).dec_proto3_int32 // can reuse
p.size = size_proto3_uint32
case reflect.Int64, reflect.Uint64:
p.enc = (*Buffer).enc_proto3_int64
p.dec = (*Buffer).dec_proto3_int64
p.size = size_proto3_int64
case reflect.Float32:
p.enc = (*Buffer).enc_proto3_uint32 // can just treat them as bits
p.dec = (*Buffer).dec_proto3_int32
p.size = size_proto3_uint32
case reflect.Float64:
p.enc = (*Buffer).enc_proto3_int64 // can just treat them as bits
p.dec = (*Buffer).dec_proto3_int64
p.size = size_proto3_int64
case reflect.String:
p.enc = (*Buffer).enc_proto3_string
p.dec = (*Buffer).dec_proto3_string
p.size = size_proto3_string
case reflect.Ptr:
switch t2 := t1.Elem(); t2.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no encoder function for %v -> %v\n", t1, t2)
break
case reflect.Bool:
p.enc = (*Buffer).enc_bool
p.dec = (*Buffer).dec_bool
p.size = size_bool
case reflect.Int32:
p.enc = (*Buffer).enc_int32
p.dec = (*Buffer).dec_int32
p.size = size_int32
case reflect.Uint32:
p.enc = (*Buffer).enc_uint32
p.dec = (*Buffer).dec_int32 // can reuse
p.size = size_uint32
case reflect.Int64, reflect.Uint64:
p.enc = (*Buffer).enc_int64
p.dec = (*Buffer).dec_int64
p.size = size_int64
case reflect.Float32:
p.enc = (*Buffer).enc_uint32 // can just treat them as bits
p.dec = (*Buffer).dec_int32
p.size = size_uint32
case reflect.Float64:
p.enc = (*Buffer).enc_int64 // can just treat them as bits
p.dec = (*Buffer).dec_int64
p.size = size_int64
case reflect.String:
p.enc = (*Buffer).enc_string
p.dec = (*Buffer).dec_string
p.size = size_string
case reflect.Struct:
p.stype = t1.Elem()
p.isMarshaler = isMarshaler(t1)
p.isUnmarshaler = isUnmarshaler(t1)
if p.Wire == "bytes" {
p.enc = (*Buffer).enc_struct_message
p.dec = (*Buffer).dec_struct_message
p.size = size_struct_message
} else {
p.enc = (*Buffer).enc_struct_group
p.dec = (*Buffer).dec_struct_group
p.size = size_struct_group
}
}
case reflect.Slice:
switch t2 := t1.Elem(); t2.Kind() {
default:
logNoSliceEnc(t1, t2)
break
case reflect.Bool:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_bool
p.size = size_slice_packed_bool
} else {
p.enc = (*Buffer).enc_slice_bool
p.size = size_slice_bool
}
p.dec = (*Buffer).dec_slice_bool
p.packedDec = (*Buffer).dec_slice_packed_bool
case reflect.Int32:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int32
p.size = size_slice_packed_int32
} else {
p.enc = (*Buffer).enc_slice_int32
p.size = size_slice_int32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case reflect.Uint32:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_uint32
p.size = size_slice_packed_uint32
} else {
p.enc = (*Buffer).enc_slice_uint32
p.size = size_slice_uint32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case reflect.Int64, reflect.Uint64:
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int64
p.size = size_slice_packed_int64
} else {
p.enc = (*Buffer).enc_slice_int64
p.size = size_slice_int64
}
p.dec = (*Buffer).dec_slice_int64
p.packedDec = (*Buffer).dec_slice_packed_int64
case reflect.Uint8:
p.dec = (*Buffer).dec_slice_byte
if p.proto3 {
p.enc = (*Buffer).enc_proto3_slice_byte
p.size = size_proto3_slice_byte
} else {
p.enc = (*Buffer).enc_slice_byte
p.size = size_slice_byte
}
case reflect.Float32, reflect.Float64:
switch t2.Bits() {
case 32:
// can just treat them as bits
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_uint32
p.size = size_slice_packed_uint32
} else {
p.enc = (*Buffer).enc_slice_uint32
p.size = size_slice_uint32
}
p.dec = (*Buffer).dec_slice_int32
p.packedDec = (*Buffer).dec_slice_packed_int32
case 64:
// can just treat them as bits
if p.Packed {
p.enc = (*Buffer).enc_slice_packed_int64
p.size = size_slice_packed_int64
} else {
p.enc = (*Buffer).enc_slice_int64
p.size = size_slice_int64
}
p.dec = (*Buffer).dec_slice_int64
p.packedDec = (*Buffer).dec_slice_packed_int64
default:
logNoSliceEnc(t1, t2)
break
}
case reflect.String:
p.enc = (*Buffer).enc_slice_string
p.dec = (*Buffer).dec_slice_string
p.size = size_slice_string
case reflect.Ptr:
switch t3 := t2.Elem(); t3.Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no ptr oenc for %T -> %T -> %T\n", t1, t2, t3)
break
case reflect.Struct:
p.stype = t2.Elem()
p.isMarshaler = isMarshaler(t2)
p.isUnmarshaler = isUnmarshaler(t2)
if p.Wire == "bytes" {
p.enc = (*Buffer).enc_slice_struct_message
p.dec = (*Buffer).dec_slice_struct_message
p.size = size_slice_struct_message
} else {
p.enc = (*Buffer).enc_slice_struct_group
p.dec = (*Buffer).dec_slice_struct_group
p.size = size_slice_struct_group
}
}
case reflect.Slice:
switch t2.Elem().Kind() {
default:
fmt.Fprintf(os.Stderr, "proto: no slice elem oenc for %T -> %T -> %T\n", t1, t2, t2.Elem())
break
case reflect.Uint8:
p.enc = (*Buffer).enc_slice_slice_byte
p.dec = (*Buffer).dec_slice_slice_byte
p.size = size_slice_slice_byte
}
}
case reflect.Map:
p.enc = (*Buffer).enc_new_map
p.dec = (*Buffer).dec_new_map
p.size = size_new_map
p.mtype = t1
p.mkeyprop = &Properties{}
p.mkeyprop.init(reflect.PtrTo(p.mtype.Key()), "Key", f.Tag.Get("protobuf_key"), nil, lockGetProp)
p.mvalprop = &Properties{}
vtype := p.mtype.Elem()
if vtype.Kind() != reflect.Ptr && vtype.Kind() != reflect.Slice {
// The value type is not a message (*T) or bytes ([]byte),
// so we need encoders for the pointer to this type.
vtype = reflect.PtrTo(vtype)
}
p.mvalprop.init(vtype, "Value", f.Tag.Get("protobuf_val"), nil, lockGetProp)
}
// precalculate tag code
wire := p.WireType
if p.Packed {
wire = WireBytes
}
x := uint32(p.Tag)<<3 | uint32(wire)
i := 0
for i = 0; x > 127; i++ {
p.tagbuf[i] = 0x80 | uint8(x&0x7F)
x >>= 7
}
p.tagbuf[i] = uint8(x)
p.tagcode = p.tagbuf[0 : i+1]
if p.stype != nil {
if lockGetProp {
p.sprop = GetProperties(p.stype)
} else {
p.sprop = getPropertiesLocked(p.stype)
}
}
}
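// Illustrative sketch, not part of the upstream file: the tagcode precomputed
// above is simply the varint encoding of (Tag<<3 | wire type); for tag 9 with
// wire type "fixed32" (5) that is the single byte 0x4d. The function name is
// hypothetical.
func exampleTagcode(tag, wire int) []byte {
return EncodeVarint(uint64(tag)<<3 | uint64(wire))
}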
var (
marshalerType = reflect.TypeOf((*Marshaler)(nil)).Elem()
unmarshalerType = reflect.TypeOf((*Unmarshaler)(nil)).Elem()
)
// isMarshaler reports whether type t implements Marshaler.
func isMarshaler(t reflect.Type) bool {
// We're checking for (likely) pointer-receiver methods
// so if t is not a pointer, something is very wrong.
// The calls above only invoke isMarshaler on pointer types.
if t.Kind() != reflect.Ptr {
panic("proto: misuse of isMarshaler")
}
return t.Implements(marshalerType)
}
// isUnmarshaler reports whether type t implements Unmarshaler.
func isUnmarshaler(t reflect.Type) bool {
// We're checking for (likely) pointer-receiver methods
// so if t is not a pointer, something is very wrong.
// The calls above only invoke isUnmarshaler on pointer types.
if t.Kind() != reflect.Ptr {
panic("proto: misuse of isUnmarshaler")
}
return t.Implements(unmarshalerType)
}
// Init populates the properties from a protocol buffer struct tag.
func (p *Properties) Init(typ reflect.Type, name, tag string, f *reflect.StructField) {
p.init(typ, name, tag, f, true)
}
func (p *Properties) init(typ reflect.Type, name, tag string, f *reflect.StructField, lockGetProp bool) {
// "bytes,49,opt,def=hello!"
p.Name = name
p.OrigName = name
if f != nil {
p.field = toField(f)
}
if tag == "" {
return
}
p.Parse(tag)
p.setEncAndDec(typ, f, lockGetProp)
}
var (
propertiesMu sync.RWMutex
propertiesMap = make(map[reflect.Type]*StructProperties)
)
// GetProperties returns the list of properties for the type represented by t.
// t must represent a generated struct type of a protocol message.
func GetProperties(t reflect.Type) *StructProperties {
if t.Kind() != reflect.Struct {
panic("proto: type must have kind struct")
}
// Most calls to GetProperties in a long-running program will be
// retrieving details for types we have seen before.
propertiesMu.RLock()
sprop, ok := propertiesMap[t]
propertiesMu.RUnlock()
if ok {
if collectStats {
stats.Chit++
}
return sprop
}
propertiesMu.Lock()
sprop = getPropertiesLocked(t)
propertiesMu.Unlock()
return sprop
}
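// Editorial note, not part of the upstream file: callers pass the struct type
// itself, not the pointer type, so a typical lookup for a hypothetical message
// type M is GetProperties(reflect.TypeOf(M{})); repeated lookups for the same
// type are served from the read-locked propertiesMap cache above.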
// getPropertiesLocked requires that propertiesMu is held.
func getPropertiesLocked(t reflect.Type) *StructProperties {
if prop, ok := propertiesMap[t]; ok {
if collectStats {
stats.Chit++
}
return prop
}
if collectStats {
stats.Cmiss++
}
prop := new(StructProperties)
// in case of recursive protos, fill this in now.
propertiesMap[t] = prop
// build properties
prop.extendable = reflect.PtrTo(t).Implements(extendableProtoType) ||
reflect.PtrTo(t).Implements(extendableProtoV1Type)
prop.unrecField = invalidField
prop.Prop = make([]*Properties, t.NumField())
prop.order = make([]int, t.NumField())
for i := 0; i < t.NumField(); i++ {
f := t.Field(i)
p := new(Properties)
name := f.Name
p.init(f.Type, name, f.Tag.Get("protobuf"), &f, false)
if f.Name == "XXX_InternalExtensions" { // special case
p.enc = (*Buffer).enc_exts
p.dec = nil // not needed
p.size = size_exts
} else if f.Name == "XXX_extensions" { // special case
p.enc = (*Buffer).enc_map
p.dec = nil // not needed
p.size = size_map
} else if f.Name == "XXX_unrecognized" { // special case
prop.unrecField = toField(&f)
}
oneof := f.Tag.Get("protobuf_oneof") // special case
if oneof != "" {
// Oneof fields don't use the traditional protobuf tag.
p.OrigName = oneof
}
prop.Prop[i] = p
prop.order[i] = i
if debug {
print(i, " ", f.Name, " ", t.String(), " ")
if p.Tag > 0 {
print(p.String())
}
print("\n")
}
if p.enc == nil && !strings.HasPrefix(f.Name, "XXX_") && oneof == "" {
fmt.Fprintln(os.Stderr, "proto: no encoder for", f.Name, f.Type.String(), "[GetProperties]")
}
}
// Re-order prop.order.
sort.Sort(prop)
type oneofMessage interface {
XXX_OneofFuncs() (func(Message, *Buffer) error, func(Message, int, int, *Buffer) (bool, error), func(Message) int, []interface{})
}
if om, ok := reflect.Zero(reflect.PtrTo(t)).Interface().(oneofMessage); ok {
var oots []interface{}
prop.oneofMarshaler, prop.oneofUnmarshaler, prop.oneofSizer, oots = om.XXX_OneofFuncs()
prop.stype = t
// Interpret oneof metadata.
prop.OneofTypes = make(map[string]*OneofProperties)
for _, oot := range oots {
oop := &OneofProperties{
Type: reflect.ValueOf(oot).Type(), // *T
Prop: new(Properties),
}
sft := oop.Type.Elem().Field(0)
oop.Prop.Name = sft.Name
oop.Prop.Parse(sft.Tag.Get("protobuf"))
// There will be exactly one interface field that
// this new value is assignable to.
for i := 0; i < t.NumField(); i++ {
f := t.Field(i)
if f.Type.Kind() != reflect.Interface {
continue
}
if !oop.Type.AssignableTo(f.Type) {
continue
}
oop.Field = i
break
}
prop.OneofTypes[oop.Prop.OrigName] = oop
}
}
// build required counts
// build tags
reqCount := 0
prop.decoderOrigNames = make(map[string]int)
for i, p := range prop.Prop {
if strings.HasPrefix(p.Name, "XXX_") {
// Internal fields should not appear in tags/origNames maps.
// They are handled specially when encoding and decoding.
continue
}
if p.Required {
reqCount++
}
prop.decoderTags.put(p.Tag, i)
prop.decoderOrigNames[p.OrigName] = i
}
prop.reqCount = reqCount
return prop
}
// propByIndex returns the Properties object for the x[0]'th field of the structure.
func propByIndex(t reflect.Type, x []int) *Properties {
if len(x) != 1 {
fmt.Fprintf(os.Stderr, "proto: field index dimension %d (not 1) for type %s\n", len(x), t)
return nil
}
prop := GetProperties(t)
return prop.Prop[x[0]]
}
// Get the address and type of a pointer to a struct from an interface.
func getbase(pb Message) (t reflect.Type, b structPointer, err error) {
if pb == nil {
err = ErrNil
return
}
// get the reflect type of the pointer to the struct.
t = reflect.TypeOf(pb)
// get the address of the struct.
value := reflect.ValueOf(pb)
b = toStructPointer(value)
return
}
// A global registry of enum types.
// The generated code will register the generated maps by calling RegisterEnum.
var enumValueMaps = make(map[string]map[string]int32)
// RegisterEnum is called from the generated code to install the enum descriptor
// maps into the global table to aid parsing text format protocol buffers.
func RegisterEnum(typeName string, unusedNameMap map[int32]string, valueMap map[string]int32) {
if _, ok := enumValueMaps[typeName]; ok {
panic("proto: duplicate enum registered: " + typeName)
}
enumValueMaps[typeName] = valueMap
}
// EnumValueMap returns the mapping from names to integers of the
// enum type enumType, or a nil if not found.
func EnumValueMap(enumType string) map[string]int32 {
return enumValueMaps[enumType]
}
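// Editorial note, not part of the upstream file: the generated proto3_proto
// package later in this change calls
// proto.RegisterEnum("proto3_proto.Message_Humour", Message_Humour_name, Message_Humour_value)
// from its init, after which EnumValueMap("proto3_proto.Message_Humour")["PUNS"]
// resolves to 1.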
// A registry of all linked message types.
// The string is a fully-qualified proto name ("pkg.Message").
var (
protoTypes = make(map[string]reflect.Type)
revProtoTypes = make(map[reflect.Type]string)
)
// RegisterType is called from generated code and maps from the fully qualified
// proto name to the type (pointer to struct) of the protocol buffer.
func RegisterType(x Message, name string) {
if _, ok := protoTypes[name]; ok {
// TODO: Some day, make this a panic.
log.Printf("proto: duplicate proto type registered: %s", name)
return
}
t := reflect.TypeOf(x)
protoTypes[name] = t
revProtoTypes[t] = name
}
// MessageName returns the fully-qualified proto name for the given message type.
func MessageName(x Message) string {
type xname interface {
XXX_MessageName() string
}
if m, ok := x.(xname); ok {
return m.XXX_MessageName()
}
return revProtoTypes[reflect.TypeOf(x)]
}
// MessageType returns the message type (pointer to struct) for a named message.
func MessageType(name string) reflect.Type { return protoTypes[name] }
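// Editorial note, not part of the upstream file: the generated proto3_proto
// package below calls proto.RegisterType((*Message)(nil), "proto3_proto.Message")
// in its init, so MessageType("proto3_proto.Message") returns the
// *proto3_proto.Message type and MessageName on such a value returns that same
// fully-qualified name.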
// A registry of all linked proto files.
var (
protoFiles = make(map[string][]byte) // file name => fileDescriptor
)
// RegisterFile is called from generated code and maps from the
// full file name of a .proto file to its compressed FileDescriptorProto.
func RegisterFile(filename string, fileDescriptor []byte) {
protoFiles[filename] = fileDescriptor
}
// FileDescriptor returns the compressed FileDescriptorProto for a .proto file.
func FileDescriptor(filename string) []byte { return protoFiles[filename] }

View File

@ -0,0 +1,347 @@
// Code generated by protoc-gen-go.
// source: proto3_proto/proto3.proto
// DO NOT EDIT!
/*
Package proto3_proto is a generated protocol buffer package.
It is generated from these files:
proto3_proto/proto3.proto
It has these top-level messages:
Message
Nested
MessageWithMap
IntMap
IntMaps
*/
package proto3_proto
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/ptypes/any"
import testdata "github.com/golang/protobuf/proto/testdata"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
type Message_Humour int32
const (
Message_UNKNOWN Message_Humour = 0
Message_PUNS Message_Humour = 1
Message_SLAPSTICK Message_Humour = 2
Message_BILL_BAILEY Message_Humour = 3
)
var Message_Humour_name = map[int32]string{
0: "UNKNOWN",
1: "PUNS",
2: "SLAPSTICK",
3: "BILL_BAILEY",
}
var Message_Humour_value = map[string]int32{
"UNKNOWN": 0,
"PUNS": 1,
"SLAPSTICK": 2,
"BILL_BAILEY": 3,
}
func (x Message_Humour) String() string {
return proto.EnumName(Message_Humour_name, int32(x))
}
func (Message_Humour) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0, 0} }
type Message struct {
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
Hilarity Message_Humour `protobuf:"varint,2,opt,name=hilarity,enum=proto3_proto.Message_Humour" json:"hilarity,omitempty"`
HeightInCm uint32 `protobuf:"varint,3,opt,name=height_in_cm,json=heightInCm" json:"height_in_cm,omitempty"`
Data []byte `protobuf:"bytes,4,opt,name=data,proto3" json:"data,omitempty"`
ResultCount int64 `protobuf:"varint,7,opt,name=result_count,json=resultCount" json:"result_count,omitempty"`
TrueScotsman bool `protobuf:"varint,8,opt,name=true_scotsman,json=trueScotsman" json:"true_scotsman,omitempty"`
Score float32 `protobuf:"fixed32,9,opt,name=score" json:"score,omitempty"`
Key []uint64 `protobuf:"varint,5,rep,packed,name=key" json:"key,omitempty"`
ShortKey []int32 `protobuf:"varint,19,rep,packed,name=short_key,json=shortKey" json:"short_key,omitempty"`
Nested *Nested `protobuf:"bytes,6,opt,name=nested" json:"nested,omitempty"`
RFunny []Message_Humour `protobuf:"varint,16,rep,packed,name=r_funny,json=rFunny,enum=proto3_proto.Message_Humour" json:"r_funny,omitempty"`
Terrain map[string]*Nested `protobuf:"bytes,10,rep,name=terrain" json:"terrain,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Proto2Field *testdata.SubDefaults `protobuf:"bytes,11,opt,name=proto2_field,json=proto2Field" json:"proto2_field,omitempty"`
Proto2Value map[string]*testdata.SubDefaults `protobuf:"bytes,13,rep,name=proto2_value,json=proto2Value" json:"proto2_value,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
Anything *google_protobuf.Any `protobuf:"bytes,14,opt,name=anything" json:"anything,omitempty"`
ManyThings []*google_protobuf.Any `protobuf:"bytes,15,rep,name=many_things,json=manyThings" json:"many_things,omitempty"`
Submessage *Message `protobuf:"bytes,17,opt,name=submessage" json:"submessage,omitempty"`
Children []*Message `protobuf:"bytes,18,rep,name=children" json:"children,omitempty"`
}
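// Editorial note, not part of the generated file: the protobuf struct tags
// above are exactly what (*Properties).Parse earlier in this change consumes;
// for the Data field, "bytes,4,opt,name=data,proto3" parses to Wire "bytes",
// Tag 4, Optional, OrigName "data" and the proto3 flag.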
func (m *Message) Reset() { *m = Message{} }
func (m *Message) String() string { return proto.CompactTextString(m) }
func (*Message) ProtoMessage() {}
func (*Message) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Message) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *Message) GetHilarity() Message_Humour {
if m != nil {
return m.Hilarity
}
return Message_UNKNOWN
}
func (m *Message) GetHeightInCm() uint32 {
if m != nil {
return m.HeightInCm
}
return 0
}
func (m *Message) GetData() []byte {
if m != nil {
return m.Data
}
return nil
}
func (m *Message) GetResultCount() int64 {
if m != nil {
return m.ResultCount
}
return 0
}
func (m *Message) GetTrueScotsman() bool {
if m != nil {
return m.TrueScotsman
}
return false
}
func (m *Message) GetScore() float32 {
if m != nil {
return m.Score
}
return 0
}
func (m *Message) GetKey() []uint64 {
if m != nil {
return m.Key
}
return nil
}
func (m *Message) GetShortKey() []int32 {
if m != nil {
return m.ShortKey
}
return nil
}
func (m *Message) GetNested() *Nested {
if m != nil {
return m.Nested
}
return nil
}
func (m *Message) GetRFunny() []Message_Humour {
if m != nil {
return m.RFunny
}
return nil
}
func (m *Message) GetTerrain() map[string]*Nested {
if m != nil {
return m.Terrain
}
return nil
}
func (m *Message) GetProto2Field() *testdata.SubDefaults {
if m != nil {
return m.Proto2Field
}
return nil
}
func (m *Message) GetProto2Value() map[string]*testdata.SubDefaults {
if m != nil {
return m.Proto2Value
}
return nil
}
func (m *Message) GetAnything() *google_protobuf.Any {
if m != nil {
return m.Anything
}
return nil
}
func (m *Message) GetManyThings() []*google_protobuf.Any {
if m != nil {
return m.ManyThings
}
return nil
}
func (m *Message) GetSubmessage() *Message {
if m != nil {
return m.Submessage
}
return nil
}
func (m *Message) GetChildren() []*Message {
if m != nil {
return m.Children
}
return nil
}
type Nested struct {
Bunny string `protobuf:"bytes,1,opt,name=bunny" json:"bunny,omitempty"`
Cute bool `protobuf:"varint,2,opt,name=cute" json:"cute,omitempty"`
}
func (m *Nested) Reset() { *m = Nested{} }
func (m *Nested) String() string { return proto.CompactTextString(m) }
func (*Nested) ProtoMessage() {}
func (*Nested) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *Nested) GetBunny() string {
if m != nil {
return m.Bunny
}
return ""
}
func (m *Nested) GetCute() bool {
if m != nil {
return m.Cute
}
return false
}
type MessageWithMap struct {
ByteMapping map[bool][]byte `protobuf:"bytes,1,rep,name=byte_mapping,json=byteMapping" json:"byte_mapping,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value,proto3"`
}
func (m *MessageWithMap) Reset() { *m = MessageWithMap{} }
func (m *MessageWithMap) String() string { return proto.CompactTextString(m) }
func (*MessageWithMap) ProtoMessage() {}
func (*MessageWithMap) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *MessageWithMap) GetByteMapping() map[bool][]byte {
if m != nil {
return m.ByteMapping
}
return nil
}
type IntMap struct {
Rtt map[int32]int32 `protobuf:"bytes,1,rep,name=rtt" json:"rtt,omitempty" protobuf_key:"varint,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
}
func (m *IntMap) Reset() { *m = IntMap{} }
func (m *IntMap) String() string { return proto.CompactTextString(m) }
func (*IntMap) ProtoMessage() {}
func (*IntMap) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *IntMap) GetRtt() map[int32]int32 {
if m != nil {
return m.Rtt
}
return nil
}
type IntMaps struct {
Maps []*IntMap `protobuf:"bytes,1,rep,name=maps" json:"maps,omitempty"`
}
func (m *IntMaps) Reset() { *m = IntMaps{} }
func (m *IntMaps) String() string { return proto.CompactTextString(m) }
func (*IntMaps) ProtoMessage() {}
func (*IntMaps) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (m *IntMaps) GetMaps() []*IntMap {
if m != nil {
return m.Maps
}
return nil
}
func init() {
proto.RegisterType((*Message)(nil), "proto3_proto.Message")
proto.RegisterType((*Nested)(nil), "proto3_proto.Nested")
proto.RegisterType((*MessageWithMap)(nil), "proto3_proto.MessageWithMap")
proto.RegisterType((*IntMap)(nil), "proto3_proto.IntMap")
proto.RegisterType((*IntMaps)(nil), "proto3_proto.IntMaps")
proto.RegisterEnum("proto3_proto.Message_Humour", Message_Humour_name, Message_Humour_value)
}
func init() { proto.RegisterFile("proto3_proto/proto3.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 733 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x09, 0x6e, 0x88, 0x02, 0xff, 0x84, 0x53, 0x6d, 0x6f, 0xf3, 0x34,
0x14, 0x25, 0x4d, 0x5f, 0xd2, 0x9b, 0x74, 0x0b, 0x5e, 0x91, 0xbc, 0x02, 0x52, 0x28, 0x12, 0x8a,
0x78, 0x49, 0xa1, 0xd3, 0xd0, 0x84, 0x10, 0x68, 0x1b, 0x9b, 0xa8, 0xd6, 0x95, 0xca, 0xdd, 0x98,
0xf8, 0x14, 0xa5, 0xad, 0xdb, 0x46, 0x34, 0x4e, 0x49, 0x1c, 0xa4, 0xfc, 0x1d, 0xfe, 0x28, 0x8f,
0x6c, 0xa7, 0x5d, 0x36, 0x65, 0xcf, 0xf3, 0x29, 0xf6, 0xf1, 0xb9, 0xf7, 0x9c, 0x1c, 0x5f, 0xc3,
0xe9, 0x2e, 0x89, 0x79, 0x7c, 0xe6, 0xcb, 0xcf, 0x40, 0x6d, 0x3c, 0xf9, 0x41, 0x56, 0xf9, 0xa8,
0x77, 0xba, 0x8e, 0xe3, 0xf5, 0x96, 0x2a, 0xca, 0x3c, 0x5b, 0x0d, 0x02, 0x96, 0x2b, 0x62, 0xef,
0x84, 0xd3, 0x94, 0x2f, 0x03, 0x1e, 0x0c, 0xc4, 0x42, 0x81, 0xfd, 0xff, 0x5b, 0xd0, 0xba, 0xa7,
0x69, 0x1a, 0xac, 0x29, 0x42, 0x50, 0x67, 0x41, 0x44, 0xb1, 0xe6, 0x68, 0x6e, 0x9b, 0xc8, 0x35,
0xba, 0x00, 0x63, 0x13, 0x6e, 0x83, 0x24, 0xe4, 0x39, 0xae, 0x39, 0x9a, 0x7b, 0x34, 0xfc, 0xcc,
0x2b, 0x0b, 0x7a, 0x45, 0xb1, 0xf7, 0x7b, 0x16, 0xc5, 0x59, 0x42, 0x0e, 0x6c, 0xe4, 0x80, 0xb5,
0xa1, 0xe1, 0x7a, 0xc3, 0xfd, 0x90, 0xf9, 0x8b, 0x08, 0xeb, 0x8e, 0xe6, 0x76, 0x08, 0x28, 0x6c,
0xc4, 0xae, 0x23, 0xa1, 0x27, 0xec, 0xe0, 0xba, 0xa3, 0xb9, 0x16, 0x91, 0x6b, 0xf4, 0x05, 0x58,
0x09, 0x4d, 0xb3, 0x2d, 0xf7, 0x17, 0x71, 0xc6, 0x38, 0x6e, 0x39, 0x9a, 0xab, 0x13, 0x53, 0x61,
0xd7, 0x02, 0x42, 0x5f, 0x42, 0x87, 0x27, 0x19, 0xf5, 0xd3, 0x45, 0xcc, 0xd3, 0x28, 0x60, 0xd8,
0x70, 0x34, 0xd7, 0x20, 0x96, 0x00, 0x67, 0x05, 0x86, 0xba, 0xd0, 0x48, 0x17, 0x71, 0x42, 0x71,
0xdb, 0xd1, 0xdc, 0x1a, 0x51, 0x1b, 0x64, 0x83, 0xfe, 0x37, 0xcd, 0x71, 0xc3, 0xd1, 0xdd, 0x3a,
0x11, 0x4b, 0xf4, 0x29, 0xb4, 0xd3, 0x4d, 0x9c, 0x70, 0x5f, 0xe0, 0x27, 0x8e, 0xee, 0x36, 0x88,
0x21, 0x81, 0x3b, 0x9a, 0xa3, 0x6f, 0xa1, 0xc9, 0x68, 0xca, 0xe9, 0x12, 0x37, 0x1d, 0xcd, 0x35,
0x87, 0xdd, 0x97, 0xbf, 0x3e, 0x91, 0x67, 0xa4, 0xe0, 0xa0, 0x73, 0x68, 0x25, 0xfe, 0x2a, 0x63,
0x2c, 0xc7, 0xb6, 0xa3, 0x7f, 0x30, 0xa9, 0x66, 0x72, 0x2b, 0xb8, 0xe8, 0x67, 0x68, 0x71, 0x9a,
0x24, 0x41, 0xc8, 0x30, 0x38, 0xba, 0x6b, 0x0e, 0xfb, 0xd5, 0x65, 0x0f, 0x8a, 0x74, 0xc3, 0x78,
0x92, 0x93, 0x7d, 0x09, 0xba, 0x00, 0x75, 0xff, 0x43, 0x7f, 0x15, 0xd2, 0xed, 0x12, 0x9b, 0xd2,
0xe8, 0x27, 0xde, 0xfe, 0xae, 0xbd, 0x59, 0x36, 0xff, 0x8d, 0xae, 0x82, 0x6c, 0xcb, 0x53, 0x62,
0x2a, 0xea, 0xad, 0x60, 0xa2, 0xd1, 0xa1, 0xf2, 0xdf, 0x60, 0x9b, 0x51, 0xdc, 0x91, 0xe2, 0x5f,
0x55, 0x8b, 0x4f, 0x25, 0xf3, 0x4f, 0x41, 0x54, 0x06, 0x8a, 0x56, 0x12, 0x41, 0xdf, 0x83, 0x11,
0xb0, 0x9c, 0x6f, 0x42, 0xb6, 0xc6, 0x47, 0x45, 0x52, 0x6a, 0x0e, 0xbd, 0xfd, 0x1c, 0x7a, 0x97,
0x2c, 0x27, 0x07, 0x16, 0x3a, 0x07, 0x33, 0x0a, 0x58, 0xee, 0xcb, 0x5d, 0x8a, 0x8f, 0xa5, 0x76,
0x75, 0x11, 0x08, 0xe2, 0x83, 0xe4, 0xa1, 0x73, 0x80, 0x34, 0x9b, 0x47, 0xca, 0x14, 0xfe, 0xb8,
0xf8, 0xd7, 0x2a, 0xc7, 0xa4, 0x44, 0x44, 0x3f, 0x80, 0xb1, 0xd8, 0x84, 0xdb, 0x65, 0x42, 0x19,
0x46, 0x52, 0xea, 0x8d, 0xa2, 0x03, 0xad, 0x37, 0x05, 0xab, 0x1c, 0xf8, 0x7e, 0x72, 0xd4, 0xd3,
0x90, 0x93, 0xf3, 0x35, 0x34, 0x54, 0x70, 0xb5, 0xf7, 0xcc, 0x86, 0xa2, 0xfc, 0x54, 0xbb, 0xd0,
0x7a, 0x8f, 0x60, 0xbf, 0x4e, 0xb1, 0xa2, 0xeb, 0x37, 0x2f, 0xbb, 0xbe, 0x71, 0x91, 0xcf, 0x6d,
0xfb, 0xbf, 0x42, 0x53, 0x0d, 0x14, 0x32, 0xa1, 0xf5, 0x38, 0xb9, 0x9b, 0xfc, 0xf1, 0x34, 0xb1,
0x3f, 0x42, 0x06, 0xd4, 0xa7, 0x8f, 0x93, 0x99, 0xad, 0xa1, 0x0e, 0xb4, 0x67, 0xe3, 0xcb, 0xe9,
0xec, 0x61, 0x74, 0x7d, 0x67, 0xd7, 0xd0, 0x31, 0x98, 0x57, 0xa3, 0xf1, 0xd8, 0xbf, 0xba, 0x1c,
0x8d, 0x6f, 0xfe, 0xb2, 0xf5, 0xfe, 0x10, 0x9a, 0xca, 0xac, 0x78, 0x33, 0x73, 0x39, 0xbe, 0xca,
0x8f, 0xda, 0x88, 0x57, 0xba, 0xc8, 0xb8, 0x32, 0x64, 0x10, 0xb9, 0xee, 0xff, 0xa7, 0xc1, 0x51,
0x91, 0xd9, 0x53, 0xc8, 0x37, 0xf7, 0xc1, 0x0e, 0x4d, 0xc1, 0x9a, 0xe7, 0x9c, 0xfa, 0x51, 0xb0,
0xdb, 0x89, 0x39, 0xd0, 0x64, 0xce, 0xdf, 0x55, 0xe6, 0x5c, 0xd4, 0x78, 0x57, 0x39, 0xa7, 0xf7,
0x8a, 0x5f, 0x4c, 0xd5, 0xfc, 0x19, 0xe9, 0xfd, 0x02, 0xf6, 0x6b, 0x42, 0x39, 0x30, 0x43, 0x05,
0xd6, 0x2d, 0x07, 0x66, 0x95, 0x93, 0xf9, 0x07, 0x9a, 0x23, 0xc6, 0x85, 0xb7, 0x01, 0xe8, 0x09,
0xe7, 0x85, 0xa5, 0xcf, 0x5f, 0x5a, 0x52, 0x14, 0x8f, 0x70, 0xae, 0x2c, 0x08, 0x66, 0xef, 0x47,
0x30, 0xf6, 0x40, 0x59, 0xb2, 0x51, 0x21, 0xd9, 0x28, 0x4b, 0x9e, 0x41, 0x4b, 0xf5, 0x4b, 0x91,
0x0b, 0xf5, 0x28, 0xd8, 0xa5, 0x85, 0x68, 0xb7, 0x4a, 0x94, 0x48, 0xc6, 0xbc, 0xa9, 0x8e, 0xde,
0x05, 0x00, 0x00, 0xff, 0xff, 0x75, 0x38, 0xad, 0x84, 0xe4, 0x05, 0x00, 0x00,
}
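The "733 bytes of a gzipped FileDescriptorProto" comment above means fileDescriptor0 is not an opaque blob: it is the compiled proto3.proto definition, gzip-compressed. A minimal sketch of inspecting it, assuming the descriptor package that ships with this version of github.com/golang/protobuf is importable (this program is illustrative and not part of the generated file):

package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io/ioutil"
	"log"

	"github.com/golang/protobuf/proto"
	"github.com/golang/protobuf/protoc-gen-go/descriptor"
	proto3pb "github.com/golang/protobuf/proto/proto3_proto"
)

func main() {
	// Descriptor() returns the gzipped FileDescriptorProto plus the message's index path.
	gz, _ := (&proto3pb.Message{}).Descriptor()

	zr, err := gzip.NewReader(bytes.NewReader(gz))
	if err != nil {
		log.Fatal(err)
	}
	raw, err := ioutil.ReadAll(zr)
	if err != nil {
		log.Fatal(err)
	}

	fd := new(descriptor.FileDescriptorProto)
	if err := proto.Unmarshal(raw, fd); err != nil {
		log.Fatal(err)
	}
	fmt.Println(fd.GetName()) // proto3_proto/proto3.proto
	for _, msg := range fd.GetMessageType() {
		fmt.Println("message:", msg.GetName())
	}
}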

View File

@ -0,0 +1,87 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2014 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
syntax = "proto3";
import "google/protobuf/any.proto";
import "testdata/test.proto";
package proto3_proto;
message Message {
enum Humour {
UNKNOWN = 0;
PUNS = 1;
SLAPSTICK = 2;
BILL_BAILEY = 3;
}
string name = 1;
Humour hilarity = 2;
uint32 height_in_cm = 3;
bytes data = 4;
int64 result_count = 7;
bool true_scotsman = 8;
float score = 9;
repeated uint64 key = 5;
repeated int32 short_key = 19;
Nested nested = 6;
repeated Humour r_funny = 16;
map<string, Nested> terrain = 10;
testdata.SubDefaults proto2_field = 11;
map<string, testdata.SubDefaults> proto2_value = 13;
google.protobuf.Any anything = 14;
repeated google.protobuf.Any many_things = 15;
Message submessage = 17;
repeated Message children = 18;
}
message Nested {
string bunny = 1;
bool cute = 2;
}
message MessageWithMap {
map<bool, bytes> byte_mapping = 1;
}
message IntMap {
map<int32, int32> rtt = 1;
}
message IntMaps {
repeated IntMap maps = 1;
}
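The anything and many_things fields in Message above are google.protobuf.Any. A short sketch of packing and unpacking one from Go, assuming the ptypes helper package from the same protobuf release is available (illustrative only):

package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/proto"
	"github.com/golang/protobuf/ptypes"
	proto3pb "github.com/golang/protobuf/proto/proto3_proto"
)

func main() {
	// Pack a Nested message into the Any-typed anything field.
	packed, err := ptypes.MarshalAny(&proto3pb.Nested{Bunny: "Monty", Cute: true})
	if err != nil {
		log.Fatal(err)
	}
	m := &proto3pb.Message{Name: "example", Anything: packed}

	// Round-trip over the wire and unpack it again.
	b, err := proto.Marshal(m)
	if err != nil {
		log.Fatal(err)
	}
	var got proto3pb.Message
	if err := proto.Unmarshal(b, &got); err != nil {
		log.Fatal(err)
	}
	var nested proto3pb.Nested
	if err := ptypes.UnmarshalAny(got.GetAnything(), &nested); err != nil {
		log.Fatal(err)
	}
	fmt.Println(nested.GetBunny()) // Monty
}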

135
vendor/github.com/golang/protobuf/proto/proto3_test.go generated vendored Normal file
View File

@ -0,0 +1,135 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2014 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto_test
import (
"testing"
"github.com/golang/protobuf/proto"
pb "github.com/golang/protobuf/proto/proto3_proto"
tpb "github.com/golang/protobuf/proto/testdata"
)
func TestProto3ZeroValues(t *testing.T) {
tests := []struct {
desc string
m proto.Message
}{
{"zero message", &pb.Message{}},
{"empty bytes field", &pb.Message{Data: []byte{}}},
}
for _, test := range tests {
b, err := proto.Marshal(test.m)
if err != nil {
t.Errorf("%s: proto.Marshal: %v", test.desc, err)
continue
}
if len(b) > 0 {
t.Errorf("%s: Encoding is non-empty: %q", test.desc, b)
}
}
}
func TestRoundTripProto3(t *testing.T) {
m := &pb.Message{
Name: "David", // (2 | 1<<3): 0x0a 0x05 "David"
Hilarity: pb.Message_PUNS, // (0 | 2<<3): 0x10 0x01
HeightInCm: 178, // (0 | 3<<3): 0x18 0xb2 0x01
Data: []byte("roboto"), // (2 | 4<<3): 0x22 0x06 "roboto"
ResultCount: 47, // (0 | 7<<3): 0x38 0x2f
TrueScotsman: true, // (0 | 8<<3): 0x40 0x01
Score: 8.1, // (5 | 9<<3): 0x4d <8.1>
Key: []uint64{1, 0xdeadbeef},
Nested: &pb.Nested{
Bunny: "Monty",
},
}
t.Logf(" m: %v", m)
b, err := proto.Marshal(m)
if err != nil {
t.Fatalf("proto.Marshal: %v", err)
}
t.Logf(" b: %q", b)
m2 := new(pb.Message)
if err := proto.Unmarshal(b, m2); err != nil {
t.Fatalf("proto.Unmarshal: %v", err)
}
t.Logf("m2: %v", m2)
if !proto.Equal(m, m2) {
t.Errorf("proto.Equal returned false:\n m: %v\nm2: %v", m, m2)
}
}
func TestGettersForBasicTypesExist(t *testing.T) {
var m pb.Message
if got := m.GetNested().GetBunny(); got != "" {
t.Errorf("m.GetNested().GetBunny() = %q, want empty string", got)
}
if got := m.GetNested().GetCute(); got {
t.Errorf("m.GetNested().GetCute() = %t, want false", got)
}
}
func TestProto3SetDefaults(t *testing.T) {
in := &pb.Message{
Terrain: map[string]*pb.Nested{
"meadow": new(pb.Nested),
},
Proto2Field: new(tpb.SubDefaults),
Proto2Value: map[string]*tpb.SubDefaults{
"badlands": new(tpb.SubDefaults),
},
}
got := proto.Clone(in).(*pb.Message)
proto.SetDefaults(got)
// There are no defaults in proto3. Everything should be the zero value, but
// we need to remember to set defaults for nested proto2 messages.
want := &pb.Message{
Terrain: map[string]*pb.Nested{
"meadow": new(pb.Nested),
},
Proto2Field: &tpb.SubDefaults{N: proto.Int64(7)},
Proto2Value: map[string]*tpb.SubDefaults{
"badlands": &tpb.SubDefaults{N: proto.Int64(7)},
},
}
if !proto.Equal(got, want) {
t.Errorf("with in = %v\nproto.SetDefaults(in) =>\ngot %v\nwant %v", in, got, want)
}
}

63
vendor/github.com/golang/protobuf/proto/size2_test.go generated vendored Normal file
View File

@ -0,0 +1,63 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
import (
"testing"
)
// This is a separate file and package from size_test.go because that one uses
// generated messages and thus may not be in package proto without having a circular
// dependency, whereas this file tests unexported details of size.go.
func TestVarintSize(t *testing.T) {
// Check the edge cases carefully.
testCases := []struct {
n uint64
size int
}{
{0, 1},
{1, 1},
{127, 1},
{128, 2},
{16383, 2},
{16384, 3},
{1<<63 - 1, 9},
{1 << 63, 10},
}
for _, tc := range testCases {
size := sizeVarint(tc.n)
if size != tc.size {
t.Errorf("sizeVarint(%d) = %d, want %d", tc.n, size, tc.size)
}
}
}
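The table above pins down the varint sizing rule: a protobuf varint carries 7 payload bits per byte, so the encoded length is ceil(bits/7), with 0 occupying one byte. A minimal standalone restatement of that rule (not the package's unexported sizeVarint, just an illustration of the same arithmetic):

package main

import "fmt"

// varintSize counts how many 7-bit groups x needs; zero still takes one byte.
func varintSize(x uint64) int {
	n := 1
	for x >= 1<<7 {
		x >>= 7
		n++
	}
	return n
}

func main() {
	for _, x := range []uint64{0, 127, 128, 16383, 16384, 1<<63 - 1, 1 << 63} {
		fmt.Println(x, varintSize(x)) // sizes: 1 1 2 2 3 9 10
	}
}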

164
vendor/github.com/golang/protobuf/proto/size_test.go generated vendored Normal file
View File

@ -0,0 +1,164 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto_test
import (
"log"
"strings"
"testing"
. "github.com/golang/protobuf/proto"
proto3pb "github.com/golang/protobuf/proto/proto3_proto"
pb "github.com/golang/protobuf/proto/testdata"
)
var messageWithExtension1 = &pb.MyMessage{Count: Int32(7)}
// messageWithExtension2 is in equal_test.go.
var messageWithExtension3 = &pb.MyMessage{Count: Int32(8)}
func init() {
if err := SetExtension(messageWithExtension1, pb.E_Ext_More, &pb.Ext{Data: String("Abbott")}); err != nil {
log.Panicf("SetExtension: %v", err)
}
if err := SetExtension(messageWithExtension3, pb.E_Ext_More, &pb.Ext{Data: String("Costello")}); err != nil {
log.Panicf("SetExtension: %v", err)
}
// Force messageWithExtension3 to have the extension encoded.
Marshal(messageWithExtension3)
}
var SizeTests = []struct {
desc string
pb Message
}{
{"empty", &pb.OtherMessage{}},
// Basic types.
{"bool", &pb.Defaults{F_Bool: Bool(true)}},
{"int32", &pb.Defaults{F_Int32: Int32(12)}},
{"negative int32", &pb.Defaults{F_Int32: Int32(-1)}},
{"small int64", &pb.Defaults{F_Int64: Int64(1)}},
{"big int64", &pb.Defaults{F_Int64: Int64(1 << 20)}},
{"negative int64", &pb.Defaults{F_Int64: Int64(-1)}},
{"fixed32", &pb.Defaults{F_Fixed32: Uint32(71)}},
{"fixed64", &pb.Defaults{F_Fixed64: Uint64(72)}},
{"uint32", &pb.Defaults{F_Uint32: Uint32(123)}},
{"uint64", &pb.Defaults{F_Uint64: Uint64(124)}},
{"float", &pb.Defaults{F_Float: Float32(12.6)}},
{"double", &pb.Defaults{F_Double: Float64(13.9)}},
{"string", &pb.Defaults{F_String: String("niles")}},
{"bytes", &pb.Defaults{F_Bytes: []byte("wowsa")}},
{"bytes, empty", &pb.Defaults{F_Bytes: []byte{}}},
{"sint32", &pb.Defaults{F_Sint32: Int32(65)}},
{"sint64", &pb.Defaults{F_Sint64: Int64(67)}},
{"enum", &pb.Defaults{F_Enum: pb.Defaults_BLUE.Enum()}},
// Repeated.
{"empty repeated bool", &pb.MoreRepeated{Bools: []bool{}}},
{"repeated bool", &pb.MoreRepeated{Bools: []bool{false, true, true, false}}},
{"packed repeated bool", &pb.MoreRepeated{BoolsPacked: []bool{false, true, true, false, true, true, true}}},
{"repeated int32", &pb.MoreRepeated{Ints: []int32{1, 12203, 1729, -1}}},
{"repeated int32 packed", &pb.MoreRepeated{IntsPacked: []int32{1, 12203, 1729}}},
{"repeated int64 packed", &pb.MoreRepeated{Int64SPacked: []int64{
// Need enough large numbers to verify that the header is counting the number of bytes
// for the field, not the number of elements.
1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62,
1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62, 1 << 62,
}}},
{"repeated string", &pb.MoreRepeated{Strings: []string{"r", "ken", "gri"}}},
{"repeated fixed", &pb.MoreRepeated{Fixeds: []uint32{1, 2, 3, 4}}},
// Nested.
{"nested", &pb.OldMessage{Nested: &pb.OldMessage_Nested{Name: String("whatever")}}},
{"group", &pb.GroupOld{G: &pb.GroupOld_G{X: Int32(12345)}}},
// Other things.
{"unrecognized", &pb.MoreRepeated{XXX_unrecognized: []byte{13<<3 | 0, 4}}},
{"extension (unencoded)", messageWithExtension1},
{"extension (encoded)", messageWithExtension3},
// proto3 message
{"proto3 empty", &proto3pb.Message{}},
{"proto3 bool", &proto3pb.Message{TrueScotsman: true}},
{"proto3 int64", &proto3pb.Message{ResultCount: 1}},
{"proto3 uint32", &proto3pb.Message{HeightInCm: 123}},
{"proto3 float", &proto3pb.Message{Score: 12.6}},
{"proto3 string", &proto3pb.Message{Name: "Snezana"}},
{"proto3 bytes", &proto3pb.Message{Data: []byte("wowsa")}},
{"proto3 bytes, empty", &proto3pb.Message{Data: []byte{}}},
{"proto3 enum", &proto3pb.Message{Hilarity: proto3pb.Message_PUNS}},
{"proto3 map field with empty bytes", &proto3pb.MessageWithMap{ByteMapping: map[bool][]byte{false: []byte{}}}},
{"map field", &pb.MessageWithMap{NameMapping: map[int32]string{1: "Rob", 7: "Andrew"}}},
{"map field with message", &pb.MessageWithMap{MsgMapping: map[int64]*pb.FloatingPoint{0x7001: &pb.FloatingPoint{F: Float64(2.0)}}}},
{"map field with bytes", &pb.MessageWithMap{ByteMapping: map[bool][]byte{true: []byte("this time for sure")}}},
{"map field with empty bytes", &pb.MessageWithMap{ByteMapping: map[bool][]byte{true: []byte{}}}},
{"map field with big entry", &pb.MessageWithMap{NameMapping: map[int32]string{8: strings.Repeat("x", 125)}}},
{"map field with big key and val", &pb.MessageWithMap{StrToStr: map[string]string{strings.Repeat("x", 70): strings.Repeat("y", 70)}}},
{"map field with big numeric key", &pb.MessageWithMap{NameMapping: map[int32]string{0xf00d: "om nom nom"}}},
{"oneof not set", &pb.Oneof{}},
{"oneof bool", &pb.Oneof{Union: &pb.Oneof_F_Bool{true}}},
{"oneof zero int32", &pb.Oneof{Union: &pb.Oneof_F_Int32{0}}},
{"oneof big int32", &pb.Oneof{Union: &pb.Oneof_F_Int32{1 << 20}}},
{"oneof int64", &pb.Oneof{Union: &pb.Oneof_F_Int64{42}}},
{"oneof fixed32", &pb.Oneof{Union: &pb.Oneof_F_Fixed32{43}}},
{"oneof fixed64", &pb.Oneof{Union: &pb.Oneof_F_Fixed64{44}}},
{"oneof uint32", &pb.Oneof{Union: &pb.Oneof_F_Uint32{45}}},
{"oneof uint64", &pb.Oneof{Union: &pb.Oneof_F_Uint64{46}}},
{"oneof float", &pb.Oneof{Union: &pb.Oneof_F_Float{47.1}}},
{"oneof double", &pb.Oneof{Union: &pb.Oneof_F_Double{48.9}}},
{"oneof string", &pb.Oneof{Union: &pb.Oneof_F_String{"Rhythmic Fman"}}},
{"oneof bytes", &pb.Oneof{Union: &pb.Oneof_F_Bytes{[]byte("let go")}}},
{"oneof sint32", &pb.Oneof{Union: &pb.Oneof_F_Sint32{50}}},
{"oneof sint64", &pb.Oneof{Union: &pb.Oneof_F_Sint64{51}}},
{"oneof enum", &pb.Oneof{Union: &pb.Oneof_F_Enum{pb.MyMessage_BLUE}}},
{"message for oneof", &pb.GoTestField{Label: String("k"), Type: String("v")}},
{"oneof message", &pb.Oneof{Union: &pb.Oneof_F_Message{&pb.GoTestField{Label: String("k"), Type: String("v")}}}},
{"oneof group", &pb.Oneof{Union: &pb.Oneof_FGroup{&pb.Oneof_F_Group{X: Int32(52)}}}},
{"oneof largest tag", &pb.Oneof{Union: &pb.Oneof_F_Largest_Tag{1}}},
{"multiple oneofs", &pb.Oneof{Union: &pb.Oneof_F_Int32{1}, Tormato: &pb.Oneof_Value{2}}},
}
func TestSize(t *testing.T) {
for _, tc := range SizeTests {
size := Size(tc.pb)
b, err := Marshal(tc.pb)
if err != nil {
t.Errorf("%v: Marshal failed: %v", tc.desc, err)
continue
}
if size != len(b) {
t.Errorf("%v: Size(%v) = %d, want %d", tc.desc, tc.pb, size, len(b))
t.Logf("%v: bytes: %#v", tc.desc, b)
}
}
}
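One practical consequence of the invariant checked above (Size equals the length Marshal produces) is that callers can size buffers up front. A hedged sketch using the same testdata types:

package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	pb "github.com/golang/protobuf/proto/testdata"
)

func main() {
	m := &pb.Defaults{F_Int32: proto.Int32(12)}

	// Size reports the exact encoded length, so the Buffer never has to grow.
	buf := proto.NewBuffer(make([]byte, 0, proto.Size(m)))
	if err := buf.Marshal(m); err != nil {
		fmt.Println("marshal:", err)
		return
	}
	fmt.Println(len(buf.Bytes()) == proto.Size(m)) // true
}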

View File

@ -0,0 +1,50 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
include ../../Make.protobuf
all: regenerate
regenerate:
rm -f test.pb.go
make test.pb.go
# The following rules are just aids to development. Not needed for typical testing.
diff: regenerate
git diff test.pb.go
restore:
cp test.pb.go.golden test.pb.go
preserve:
cp test.pb.go test.pb.go.golden

View File

@ -0,0 +1,86 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2012 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Verify that the compiler output for test.proto is unchanged.
package testdata
import (
"crypto/sha1"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"testing"
)
// sum returns in string form (for easy comparison) the SHA-1 hash of the named file.
func sum(t *testing.T, name string) string {
data, err := ioutil.ReadFile(name)
if err != nil {
t.Fatal(err)
}
t.Logf("sum(%q): length is %d", name, len(data))
hash := sha1.New()
_, err = hash.Write(data)
if err != nil {
t.Fatal(err)
}
return fmt.Sprintf("% x", hash.Sum(nil))
}
func run(t *testing.T, name string, args ...string) {
cmd := exec.Command(name, args...)
cmd.Stdin = os.Stdin
cmd.Stdout = os.Stdout
cmd.Stderr = os.Stderr
err := cmd.Run()
if err != nil {
t.Fatal(err)
}
}
func TestGolden(t *testing.T) {
// Compute the original checksum.
goldenSum := sum(t, "test.pb.go")
// Run the proto compiler.
run(t, "protoc", "--go_out="+os.TempDir(), "test.proto")
newFile := filepath.Join(os.TempDir(), "test.pb.go")
defer os.Remove(newFile)
// Compute the new checksum.
newSum := sum(t, newFile)
// Verify
if newSum != goldenSum {
run(t, "diff", "-u", "test.pb.go", newFile)
t.Fatal("Code generated by protoc-gen-go has changed; update test.pb.go")
}
}

File diff suppressed because it is too large

View File

@ -0,0 +1,548 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// A feature-rich test file for the protocol compiler and libraries.
syntax = "proto2";
package testdata;
enum FOO { FOO1 = 1; };
message GoEnum {
required FOO foo = 1;
}
message GoTestField {
required string Label = 1;
required string Type = 2;
}
message GoTest {
// An enum, for completeness.
enum KIND {
VOID = 0;
// Basic types
BOOL = 1;
BYTES = 2;
FINGERPRINT = 3;
FLOAT = 4;
INT = 5;
STRING = 6;
TIME = 7;
// Groupings
TUPLE = 8;
ARRAY = 9;
MAP = 10;
// Table types
TABLE = 11;
// Functions
FUNCTION = 12; // last tag
};
// Some typical parameters
required KIND Kind = 1;
optional string Table = 2;
optional int32 Param = 3;
// Required, repeated and optional foreign fields.
required GoTestField RequiredField = 4;
repeated GoTestField RepeatedField = 5;
optional GoTestField OptionalField = 6;
// Required fields of all basic types
required bool F_Bool_required = 10;
required int32 F_Int32_required = 11;
required int64 F_Int64_required = 12;
required fixed32 F_Fixed32_required = 13;
required fixed64 F_Fixed64_required = 14;
required uint32 F_Uint32_required = 15;
required uint64 F_Uint64_required = 16;
required float F_Float_required = 17;
required double F_Double_required = 18;
required string F_String_required = 19;
required bytes F_Bytes_required = 101;
required sint32 F_Sint32_required = 102;
required sint64 F_Sint64_required = 103;
// Repeated fields of all basic types
repeated bool F_Bool_repeated = 20;
repeated int32 F_Int32_repeated = 21;
repeated int64 F_Int64_repeated = 22;
repeated fixed32 F_Fixed32_repeated = 23;
repeated fixed64 F_Fixed64_repeated = 24;
repeated uint32 F_Uint32_repeated = 25;
repeated uint64 F_Uint64_repeated = 26;
repeated float F_Float_repeated = 27;
repeated double F_Double_repeated = 28;
repeated string F_String_repeated = 29;
repeated bytes F_Bytes_repeated = 201;
repeated sint32 F_Sint32_repeated = 202;
repeated sint64 F_Sint64_repeated = 203;
// Optional fields of all basic types
optional bool F_Bool_optional = 30;
optional int32 F_Int32_optional = 31;
optional int64 F_Int64_optional = 32;
optional fixed32 F_Fixed32_optional = 33;
optional fixed64 F_Fixed64_optional = 34;
optional uint32 F_Uint32_optional = 35;
optional uint64 F_Uint64_optional = 36;
optional float F_Float_optional = 37;
optional double F_Double_optional = 38;
optional string F_String_optional = 39;
optional bytes F_Bytes_optional = 301;
optional sint32 F_Sint32_optional = 302;
optional sint64 F_Sint64_optional = 303;
// Default-valued fields of all basic types
optional bool F_Bool_defaulted = 40 [default=true];
optional int32 F_Int32_defaulted = 41 [default=32];
optional int64 F_Int64_defaulted = 42 [default=64];
optional fixed32 F_Fixed32_defaulted = 43 [default=320];
optional fixed64 F_Fixed64_defaulted = 44 [default=640];
optional uint32 F_Uint32_defaulted = 45 [default=3200];
optional uint64 F_Uint64_defaulted = 46 [default=6400];
optional float F_Float_defaulted = 47 [default=314159.];
optional double F_Double_defaulted = 48 [default=271828.];
optional string F_String_defaulted = 49 [default="hello, \"world!\"\n"];
optional bytes F_Bytes_defaulted = 401 [default="Bignose"];
optional sint32 F_Sint32_defaulted = 402 [default = -32];
optional sint64 F_Sint64_defaulted = 403 [default = -64];
// Packed repeated fields (no string or bytes).
repeated bool F_Bool_repeated_packed = 50 [packed=true];
repeated int32 F_Int32_repeated_packed = 51 [packed=true];
repeated int64 F_Int64_repeated_packed = 52 [packed=true];
repeated fixed32 F_Fixed32_repeated_packed = 53 [packed=true];
repeated fixed64 F_Fixed64_repeated_packed = 54 [packed=true];
repeated uint32 F_Uint32_repeated_packed = 55 [packed=true];
repeated uint64 F_Uint64_repeated_packed = 56 [packed=true];
repeated float F_Float_repeated_packed = 57 [packed=true];
repeated double F_Double_repeated_packed = 58 [packed=true];
repeated sint32 F_Sint32_repeated_packed = 502 [packed=true];
repeated sint64 F_Sint64_repeated_packed = 503 [packed=true];
// Required, repeated, and optional groups.
required group RequiredGroup = 70 {
required string RequiredField = 71;
};
repeated group RepeatedGroup = 80 {
required string RequiredField = 81;
};
optional group OptionalGroup = 90 {
required string RequiredField = 91;
};
}
// For testing a group containing a required field.
message GoTestRequiredGroupField {
required group Group = 1 {
required int32 Field = 2;
};
}
// For testing skipping of unrecognized fields.
// Numbers are all big, larger than tag numbers in GoTestField,
// the message used in the corresponding test.
message GoSkipTest {
required int32 skip_int32 = 11;
required fixed32 skip_fixed32 = 12;
required fixed64 skip_fixed64 = 13;
required string skip_string = 14;
required group SkipGroup = 15 {
required int32 group_int32 = 16;
required string group_string = 17;
}
}
// For testing packed/non-packed decoder switching.
// A serialized instance of one should be deserializable as the other.
message NonPackedTest {
repeated int32 a = 1;
}
message PackedTest {
repeated int32 b = 1 [packed=true];
}
message MaxTag {
// Maximum possible tag number.
optional string last_field = 536870911;
}
message OldMessage {
message Nested {
optional string name = 1;
}
optional Nested nested = 1;
optional int32 num = 2;
}
// NewMessage is wire compatible with OldMessage;
// imagine it as a future version.
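// (A Go round-trip sketch of this wire compatibility appears after this file.)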
message NewMessage {
message Nested {
optional string name = 1;
optional string food_group = 2;
}
optional Nested nested = 1;
// This is an int32 in OldMessage.
optional int64 num = 2;
}
// Smaller tests for ASCII formatting.
message InnerMessage {
required string host = 1;
optional int32 port = 2 [default=4000];
optional bool connected = 3;
}
message OtherMessage {
optional int64 key = 1;
optional bytes value = 2;
optional float weight = 3;
optional InnerMessage inner = 4;
extensions 100 to max;
}
message RequiredInnerMessage {
required InnerMessage leo_finally_won_an_oscar = 1;
}
message MyMessage {
required int32 count = 1;
optional string name = 2;
optional string quote = 3;
repeated string pet = 4;
optional InnerMessage inner = 5;
repeated OtherMessage others = 6;
optional RequiredInnerMessage we_must_go_deeper = 13;
repeated InnerMessage rep_inner = 12;
enum Color {
RED = 0;
GREEN = 1;
BLUE = 2;
};
optional Color bikeshed = 7;
optional group SomeGroup = 8 {
optional int32 group_field = 9;
}
// This field becomes [][]byte in the generated code.
repeated bytes rep_bytes = 10;
optional double bigfloat = 11;
extensions 100 to max;
}
message Ext {
extend MyMessage {
optional Ext more = 103;
optional string text = 104;
optional int32 number = 105;
}
optional string data = 1;
}
extend MyMessage {
repeated string greeting = 106;
}
message ComplexExtension {
optional int32 first = 1;
optional int32 second = 2;
repeated int32 third = 3;
}
extend OtherMessage {
optional ComplexExtension complex = 200;
repeated ComplexExtension r_complex = 201;
}
message DefaultsMessage {
enum DefaultsEnum {
ZERO = 0;
ONE = 1;
TWO = 2;
};
extensions 100 to max;
}
extend DefaultsMessage {
optional double no_default_double = 101;
optional float no_default_float = 102;
optional int32 no_default_int32 = 103;
optional int64 no_default_int64 = 104;
optional uint32 no_default_uint32 = 105;
optional uint64 no_default_uint64 = 106;
optional sint32 no_default_sint32 = 107;
optional sint64 no_default_sint64 = 108;
optional fixed32 no_default_fixed32 = 109;
optional fixed64 no_default_fixed64 = 110;
optional sfixed32 no_default_sfixed32 = 111;
optional sfixed64 no_default_sfixed64 = 112;
optional bool no_default_bool = 113;
optional string no_default_string = 114;
optional bytes no_default_bytes = 115;
optional DefaultsMessage.DefaultsEnum no_default_enum = 116;
optional double default_double = 201 [default = 3.1415];
optional float default_float = 202 [default = 3.14];
optional int32 default_int32 = 203 [default = 42];
optional int64 default_int64 = 204 [default = 43];
optional uint32 default_uint32 = 205 [default = 44];
optional uint64 default_uint64 = 206 [default = 45];
optional sint32 default_sint32 = 207 [default = 46];
optional sint64 default_sint64 = 208 [default = 47];
optional fixed32 default_fixed32 = 209 [default = 48];
optional fixed64 default_fixed64 = 210 [default = 49];
optional sfixed32 default_sfixed32 = 211 [default = 50];
optional sfixed64 default_sfixed64 = 212 [default = 51];
optional bool default_bool = 213 [default = true];
optional string default_string = 214 [default = "Hello, string"];
optional bytes default_bytes = 215 [default = "Hello, bytes"];
optional DefaultsMessage.DefaultsEnum default_enum = 216 [default = ONE];
}
message MyMessageSet {
option message_set_wire_format = true;
extensions 100 to max;
}
message Empty {
}
extend MyMessageSet {
optional Empty x201 = 201;
optional Empty x202 = 202;
optional Empty x203 = 203;
optional Empty x204 = 204;
optional Empty x205 = 205;
optional Empty x206 = 206;
optional Empty x207 = 207;
optional Empty x208 = 208;
optional Empty x209 = 209;
optional Empty x210 = 210;
optional Empty x211 = 211;
optional Empty x212 = 212;
optional Empty x213 = 213;
optional Empty x214 = 214;
optional Empty x215 = 215;
optional Empty x216 = 216;
optional Empty x217 = 217;
optional Empty x218 = 218;
optional Empty x219 = 219;
optional Empty x220 = 220;
optional Empty x221 = 221;
optional Empty x222 = 222;
optional Empty x223 = 223;
optional Empty x224 = 224;
optional Empty x225 = 225;
optional Empty x226 = 226;
optional Empty x227 = 227;
optional Empty x228 = 228;
optional Empty x229 = 229;
optional Empty x230 = 230;
optional Empty x231 = 231;
optional Empty x232 = 232;
optional Empty x233 = 233;
optional Empty x234 = 234;
optional Empty x235 = 235;
optional Empty x236 = 236;
optional Empty x237 = 237;
optional Empty x238 = 238;
optional Empty x239 = 239;
optional Empty x240 = 240;
optional Empty x241 = 241;
optional Empty x242 = 242;
optional Empty x243 = 243;
optional Empty x244 = 244;
optional Empty x245 = 245;
optional Empty x246 = 246;
optional Empty x247 = 247;
optional Empty x248 = 248;
optional Empty x249 = 249;
optional Empty x250 = 250;
}
message MessageList {
repeated group Message = 1 {
required string name = 2;
required int32 count = 3;
}
}
message Strings {
optional string string_field = 1;
optional bytes bytes_field = 2;
}
message Defaults {
enum Color {
RED = 0;
GREEN = 1;
BLUE = 2;
}
// Default-valued fields of all basic types.
// Same as GoTest, but copied here to make testing easier.
optional bool F_Bool = 1 [default=true];
optional int32 F_Int32 = 2 [default=32];
optional int64 F_Int64 = 3 [default=64];
optional fixed32 F_Fixed32 = 4 [default=320];
optional fixed64 F_Fixed64 = 5 [default=640];
optional uint32 F_Uint32 = 6 [default=3200];
optional uint64 F_Uint64 = 7 [default=6400];
optional float F_Float = 8 [default=314159.];
optional double F_Double = 9 [default=271828.];
optional string F_String = 10 [default="hello, \"world!\"\n"];
optional bytes F_Bytes = 11 [default="Bignose"];
optional sint32 F_Sint32 = 12 [default=-32];
optional sint64 F_Sint64 = 13 [default=-64];
optional Color F_Enum = 14 [default=GREEN];
// More fields with crazy defaults.
optional float F_Pinf = 15 [default=inf];
optional float F_Ninf = 16 [default=-inf];
optional float F_Nan = 17 [default=nan];
// Sub-message.
optional SubDefaults sub = 18;
// Redundant but explicit defaults.
optional string str_zero = 19 [default=""];
}
message SubDefaults {
optional int64 n = 1 [default=7];
}
message RepeatedEnum {
enum Color {
RED = 1;
}
repeated Color color = 1;
}
message MoreRepeated {
repeated bool bools = 1;
repeated bool bools_packed = 2 [packed=true];
repeated int32 ints = 3;
repeated int32 ints_packed = 4 [packed=true];
repeated int64 int64s_packed = 7 [packed=true];
repeated string strings = 5;
repeated fixed32 fixeds = 6;
}
// GroupOld and GroupNew have the same wire format.
// GroupNew has a new field inside a group.
message GroupOld {
optional group G = 101 {
optional int32 x = 2;
}
}
message GroupNew {
optional group G = 101 {
optional int32 x = 2;
optional int32 y = 3;
}
}
message FloatingPoint {
required double f = 1;
optional bool exact = 2;
}
message MessageWithMap {
map<int32, string> name_mapping = 1;
map<sint64, FloatingPoint> msg_mapping = 2;
map<bool, bytes> byte_mapping = 3;
map<string, string> str_to_str = 4;
}
message Oneof {
oneof union {
bool F_Bool = 1;
int32 F_Int32 = 2;
int64 F_Int64 = 3;
fixed32 F_Fixed32 = 4;
fixed64 F_Fixed64 = 5;
uint32 F_Uint32 = 6;
uint64 F_Uint64 = 7;
float F_Float = 8;
double F_Double = 9;
string F_String = 10;
bytes F_Bytes = 11;
sint32 F_Sint32 = 12;
sint64 F_Sint64 = 13;
MyMessage.Color F_Enum = 14;
GoTestField F_Message = 15;
group F_Group = 16 {
optional int32 x = 17;
}
int32 F_Largest_Tag = 536870911;
}
oneof tormato {
int32 value = 100;
}
}
message Communique {
optional bool make_me_cry = 1;
// This is a oneof, called "union".
oneof union {
int32 number = 5;
string name = 6;
bytes data = 7;
double temp_c = 8;
MyMessage.Color col = 9;
Strings msg = 10;
}
}
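The OldMessage/NewMessage pair above is declared wire compatible: num uses tag 2 with varint encoding in both, and only its Go width differs (int32 vs int64), so bytes produced by one decode cleanly in the other. A minimal sketch, assuming the generated testdata package:

package main

import (
	"fmt"
	"log"

	"github.com/golang/protobuf/proto"
	pb "github.com/golang/protobuf/proto/testdata"
)

func main() {
	old := &pb.OldMessage{
		Nested: &pb.OldMessage_Nested{Name: proto.String("whatever")},
		Num:    proto.Int32(47),
	}
	b, err := proto.Marshal(old)
	if err != nil {
		log.Fatal(err)
	}

	// The same bytes decode as the "future" version of the message.
	newer := new(pb.NewMessage)
	if err := proto.Unmarshal(b, newer); err != nil {
		log.Fatal(err)
	}
	fmt.Println(newer.GetNum())              // 47, now an int64
	fmt.Println(newer.GetNested().GetName()) // whatever
}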

854
vendor/github.com/golang/protobuf/proto/text.go generated vendored Normal file
View File

@ -0,0 +1,854 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
// Functions for writing the text protocol buffer format.
import (
"bufio"
"bytes"
"encoding"
"errors"
"fmt"
"io"
"log"
"math"
"reflect"
"sort"
"strings"
)
var (
newline = []byte("\n")
spaces = []byte(" ")
gtNewline = []byte(">\n")
endBraceNewline = []byte("}\n")
backslashN = []byte{'\\', 'n'}
backslashR = []byte{'\\', 'r'}
backslashT = []byte{'\\', 't'}
backslashDQ = []byte{'\\', '"'}
backslashBS = []byte{'\\', '\\'}
posInf = []byte("inf")
negInf = []byte("-inf")
nan = []byte("nan")
)
type writer interface {
io.Writer
WriteByte(byte) error
}
// textWriter is an io.Writer that tracks its indentation level.
type textWriter struct {
ind int
complete bool // if the current position is a complete line
compact bool // whether to write out as a one-liner
w writer
}
func (w *textWriter) WriteString(s string) (n int, err error) {
if !strings.Contains(s, "\n") {
if !w.compact && w.complete {
w.writeIndent()
}
w.complete = false
return io.WriteString(w.w, s)
}
// WriteString is typically called without newlines, so this
// codepath and its copy are rare. We copy to avoid
// duplicating all of Write's logic here.
return w.Write([]byte(s))
}
func (w *textWriter) Write(p []byte) (n int, err error) {
newlines := bytes.Count(p, newline)
if newlines == 0 {
if !w.compact && w.complete {
w.writeIndent()
}
n, err = w.w.Write(p)
w.complete = false
return n, err
}
frags := bytes.SplitN(p, newline, newlines+1)
if w.compact {
for i, frag := range frags {
if i > 0 {
if err := w.w.WriteByte(' '); err != nil {
return n, err
}
n++
}
nn, err := w.w.Write(frag)
n += nn
if err != nil {
return n, err
}
}
return n, nil
}
for i, frag := range frags {
if w.complete {
w.writeIndent()
}
nn, err := w.w.Write(frag)
n += nn
if err != nil {
return n, err
}
if i+1 < len(frags) {
if err := w.w.WriteByte('\n'); err != nil {
return n, err
}
n++
}
}
w.complete = len(frags[len(frags)-1]) == 0
return n, nil
}
func (w *textWriter) WriteByte(c byte) error {
if w.compact && c == '\n' {
c = ' '
}
if !w.compact && w.complete {
w.writeIndent()
}
err := w.w.WriteByte(c)
w.complete = c == '\n'
return err
}
func (w *textWriter) indent() { w.ind++ }
func (w *textWriter) unindent() {
if w.ind == 0 {
log.Print("proto: textWriter unindented too far")
return
}
w.ind--
}
func writeName(w *textWriter, props *Properties) error {
if _, err := w.WriteString(props.OrigName); err != nil {
return err
}
if props.Wire != "group" {
return w.WriteByte(':')
}
return nil
}
// raw is the interface satisfied by RawMessage.
type raw interface {
Bytes() []byte
}
func requiresQuotes(u string) bool {
// When the type URL contains any character outside the set [0-9A-Za-z._/], it must be quoted.
for _, ch := range u {
switch {
case ch == '.' || ch == '/' || ch == '_':
continue
case '0' <= ch && ch <= '9':
continue
case 'A' <= ch && ch <= 'Z':
continue
case 'a' <= ch && ch <= 'z':
continue
default:
return true
}
}
return false
}
// isAny reports whether sv is a google.protobuf.Any message
func isAny(sv reflect.Value) bool {
type wkt interface {
XXX_WellKnownType() string
}
t, ok := sv.Addr().Interface().(wkt)
return ok && t.XXX_WellKnownType() == "Any"
}
// writeProto3Any writes an expanded google.protobuf.Any message.
//
// It returns (false, nil) if sv value can't be unmarshaled (e.g. because
// required messages are not linked in).
//
// It returns (true, error) when sv was written in expanded format or an error
// was encountered.
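//
// Editorial note: this hook is what the ExpandAny option turns on. From the
// public API the difference is roughly
//
//	tm := &TextMarshaler{ExpandAny: true}
//	s := tm.Text(msg) // an Any whose type is linked in prints as [type_url] <expanded fields>
//
// versus the default, which prints the Any's raw type_url and value fields.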
func (tm *TextMarshaler) writeProto3Any(w *textWriter, sv reflect.Value) (bool, error) {
turl := sv.FieldByName("TypeUrl")
val := sv.FieldByName("Value")
if !turl.IsValid() || !val.IsValid() {
return true, errors.New("proto: invalid google.protobuf.Any message")
}
b, ok := val.Interface().([]byte)
if !ok {
return true, errors.New("proto: invalid google.protobuf.Any message")
}
parts := strings.Split(turl.String(), "/")
mt := MessageType(parts[len(parts)-1])
if mt == nil {
return false, nil
}
m := reflect.New(mt.Elem())
if err := Unmarshal(b, m.Interface().(Message)); err != nil {
return false, nil
}
w.Write([]byte("["))
u := turl.String()
if requiresQuotes(u) {
writeString(w, u)
} else {
w.Write([]byte(u))
}
if w.compact {
w.Write([]byte("]:<"))
} else {
w.Write([]byte("]: <\n"))
w.ind++
}
if err := tm.writeStruct(w, m.Elem()); err != nil {
return true, err
}
if w.compact {
w.Write([]byte("> "))
} else {
w.ind--
w.Write([]byte(">\n"))
}
return true, nil
}
func (tm *TextMarshaler) writeStruct(w *textWriter, sv reflect.Value) error {
if tm.ExpandAny && isAny(sv) {
if canExpand, err := tm.writeProto3Any(w, sv); canExpand {
return err
}
}
st := sv.Type()
sprops := GetProperties(st)
for i := 0; i < sv.NumField(); i++ {
fv := sv.Field(i)
props := sprops.Prop[i]
name := st.Field(i).Name
if strings.HasPrefix(name, "XXX_") {
// There are two XXX_ fields:
// XXX_unrecognized []byte
// XXX_extensions map[int32]proto.Extension
// The first is handled here;
// the second is handled at the bottom of this function.
if name == "XXX_unrecognized" && !fv.IsNil() {
if err := writeUnknownStruct(w, fv.Interface().([]byte)); err != nil {
return err
}
}
continue
}
if fv.Kind() == reflect.Ptr && fv.IsNil() {
// Field not filled in. This could be an optional field or
// a required field that wasn't filled in. Either way, there
// isn't anything we can show for it.
continue
}
if fv.Kind() == reflect.Slice && fv.IsNil() {
// Repeated field that is empty, or a bytes field that is unused.
continue
}
if props.Repeated && fv.Kind() == reflect.Slice {
// Repeated field.
for j := 0; j < fv.Len(); j++ {
if err := writeName(w, props); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
v := fv.Index(j)
if v.Kind() == reflect.Ptr && v.IsNil() {
// A nil message in a repeated field is not valid,
// but we can handle that more gracefully than panicking.
if _, err := w.Write([]byte("<nil>\n")); err != nil {
return err
}
continue
}
if err := tm.writeAny(w, v, props); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
}
continue
}
if fv.Kind() == reflect.Map {
// Map fields are rendered as a repeated struct with key/value fields.
keys := fv.MapKeys()
sort.Sort(mapKeys(keys))
for _, key := range keys {
val := fv.MapIndex(key)
if err := writeName(w, props); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
// open struct
if err := w.WriteByte('<'); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte('\n'); err != nil {
return err
}
}
w.indent()
// key
if _, err := w.WriteString("key:"); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
if err := tm.writeAny(w, key, props.mkeyprop); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
// nil values aren't legal, but we can avoid panicking because of them.
if val.Kind() != reflect.Ptr || !val.IsNil() {
// value
if _, err := w.WriteString("value:"); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
if err := tm.writeAny(w, val, props.mvalprop); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
}
// close struct
w.unindent()
if err := w.WriteByte('>'); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
}
continue
}
if props.proto3 && fv.Kind() == reflect.Slice && fv.Len() == 0 {
// empty bytes field
continue
}
if fv.Kind() != reflect.Ptr && fv.Kind() != reflect.Slice {
// proto3 non-repeated scalar field; skip if zero value
if isProto3Zero(fv) {
continue
}
}
if fv.Kind() == reflect.Interface {
// Check if it is a oneof.
if st.Field(i).Tag.Get("protobuf_oneof") != "" {
// fv is nil, or holds a pointer to generated struct.
// That generated struct has exactly one field,
// which has a protobuf struct tag.
if fv.IsNil() {
continue
}
inner := fv.Elem().Elem() // interface -> *T -> T
tag := inner.Type().Field(0).Tag.Get("protobuf")
props = new(Properties) // Overwrite the outer props var, but not its pointee.
props.Parse(tag)
// Write the value in the oneof, not the oneof itself.
fv = inner.Field(0)
// Special case to cope with malformed messages gracefully:
// If the value in the oneof is a nil pointer, don't panic
// in writeAny.
if fv.Kind() == reflect.Ptr && fv.IsNil() {
// Use errors.New so writeAny won't render quotes.
msg := errors.New("/* nil */")
fv = reflect.ValueOf(&msg).Elem()
}
}
}
if err := writeName(w, props); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
if b, ok := fv.Interface().(raw); ok {
if err := writeRaw(w, b.Bytes()); err != nil {
return err
}
continue
}
// Enums have a String method, so writeAny will work fine.
if err := tm.writeAny(w, fv, props); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
}
// Extensions (the XXX_extensions field).
pv := sv.Addr()
if _, ok := extendable(pv.Interface()); ok {
if err := tm.writeExtensions(w, pv); err != nil {
return err
}
}
return nil
}
// writeRaw writes an uninterpreted raw message.
func writeRaw(w *textWriter, b []byte) error {
if err := w.WriteByte('<'); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte('\n'); err != nil {
return err
}
}
w.indent()
if err := writeUnknownStruct(w, b); err != nil {
return err
}
w.unindent()
if err := w.WriteByte('>'); err != nil {
return err
}
return nil
}
// writeAny writes an arbitrary field.
func (tm *TextMarshaler) writeAny(w *textWriter, v reflect.Value, props *Properties) error {
v = reflect.Indirect(v)
// Floats have special cases.
if v.Kind() == reflect.Float32 || v.Kind() == reflect.Float64 {
x := v.Float()
var b []byte
switch {
case math.IsInf(x, 1):
b = posInf
case math.IsInf(x, -1):
b = negInf
case math.IsNaN(x):
b = nan
}
if b != nil {
_, err := w.Write(b)
return err
}
// Other values are handled below.
}
// We don't attempt to serialise every possible value type; only those
// that can occur in protocol buffers.
switch v.Kind() {
case reflect.Slice:
// Should only be a []byte; repeated fields are handled in writeStruct.
if err := writeString(w, string(v.Bytes())); err != nil {
return err
}
case reflect.String:
if err := writeString(w, v.String()); err != nil {
return err
}
case reflect.Struct:
// Required/optional group/message.
var bra, ket byte = '<', '>'
if props != nil && props.Wire == "group" {
bra, ket = '{', '}'
}
if err := w.WriteByte(bra); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte('\n'); err != nil {
return err
}
}
w.indent()
if etm, ok := v.Interface().(encoding.TextMarshaler); ok {
text, err := etm.MarshalText()
if err != nil {
return err
}
if _, err = w.Write(text); err != nil {
return err
}
} else if err := tm.writeStruct(w, v); err != nil {
return err
}
w.unindent()
if err := w.WriteByte(ket); err != nil {
return err
}
default:
_, err := fmt.Fprint(w, v.Interface())
return err
}
return nil
}
// equivalent to C's isprint.
func isprint(c byte) bool {
return c >= 0x20 && c < 0x7f
}
// writeString writes a string in the protocol buffer text format.
// It is similar to strconv.Quote except we don't use Go escape sequences,
// we treat the string as a byte sequence, and we use octal escapes.
// These differences are to maintain interoperability with the other
// languages' implementations of the text format.
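// For example (editorial addition): writeString(w, "café\n") emits "caf\303\251\n";
// the UTF-8 bytes 0xC3 0xA9 fall outside isprint and get three-digit octal escapes,
// '\n' uses its two-character escape, and printable ASCII passes through unchanged.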
func writeString(w *textWriter, s string) error {
// use WriteByte here to get any needed indent
if err := w.WriteByte('"'); err != nil {
return err
}
// Loop over the bytes, not the runes.
for i := 0; i < len(s); i++ {
var err error
// Divergence from C++: we don't escape apostrophes.
// There's no need to escape them, and the C++ parser
// copes with a naked apostrophe.
switch c := s[i]; c {
case '\n':
_, err = w.w.Write(backslashN)
case '\r':
_, err = w.w.Write(backslashR)
case '\t':
_, err = w.w.Write(backslashT)
case '"':
_, err = w.w.Write(backslashDQ)
case '\\':
_, err = w.w.Write(backslashBS)
default:
if isprint(c) {
err = w.w.WriteByte(c)
} else {
_, err = fmt.Fprintf(w.w, "\\%03o", c)
}
}
if err != nil {
return err
}
}
return w.WriteByte('"')
}
func writeUnknownStruct(w *textWriter, data []byte) (err error) {
if !w.compact {
if _, err := fmt.Fprintf(w, "/* %d unknown bytes */\n", len(data)); err != nil {
return err
}
}
b := NewBuffer(data)
for b.index < len(b.buf) {
x, err := b.DecodeVarint()
if err != nil {
_, err := fmt.Fprintf(w, "/* %v */\n", err)
return err
}
wire, tag := x&7, x>>3
if wire == WireEndGroup {
w.unindent()
if _, err := w.Write(endBraceNewline); err != nil {
return err
}
continue
}
if _, err := fmt.Fprint(w, tag); err != nil {
return err
}
if wire != WireStartGroup {
if err := w.WriteByte(':'); err != nil {
return err
}
}
if !w.compact || wire == WireStartGroup {
if err := w.WriteByte(' '); err != nil {
return err
}
}
switch wire {
case WireBytes:
buf, e := b.DecodeRawBytes(false)
if e == nil {
_, err = fmt.Fprintf(w, "%q", buf)
} else {
_, err = fmt.Fprintf(w, "/* %v */", e)
}
case WireFixed32:
x, err = b.DecodeFixed32()
err = writeUnknownInt(w, x, err)
case WireFixed64:
x, err = b.DecodeFixed64()
err = writeUnknownInt(w, x, err)
case WireStartGroup:
err = w.WriteByte('{')
w.indent()
case WireVarint:
x, err = b.DecodeVarint()
err = writeUnknownInt(w, x, err)
default:
_, err = fmt.Fprintf(w, "/* unknown wire type %d */", wire)
}
if err != nil {
return err
}
if err = w.WriteByte('\n'); err != nil {
return err
}
}
return nil
}
func writeUnknownInt(w *textWriter, x uint64, err error) error {
if err == nil {
_, err = fmt.Fprint(w, x)
} else {
_, err = fmt.Fprintf(w, "/* %v */", err)
}
return err
}
type int32Slice []int32
func (s int32Slice) Len() int { return len(s) }
func (s int32Slice) Less(i, j int) bool { return s[i] < s[j] }
func (s int32Slice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
// writeExtensions writes all the extensions in pv.
// pv is assumed to be a pointer to a protocol message struct that is extendable.
func (tm *TextMarshaler) writeExtensions(w *textWriter, pv reflect.Value) error {
emap := extensionMaps[pv.Type().Elem()]
ep, _ := extendable(pv.Interface())
// Order the extensions by ID.
// This isn't strictly necessary, but it will give us
// canonical output, which will also make testing easier.
m, mu := ep.extensionsRead()
if m == nil {
return nil
}
mu.Lock()
ids := make([]int32, 0, len(m))
for id := range m {
ids = append(ids, id)
}
sort.Sort(int32Slice(ids))
mu.Unlock()
for _, extNum := range ids {
ext := m[extNum]
var desc *ExtensionDesc
if emap != nil {
desc = emap[extNum]
}
if desc == nil {
// Unknown extension.
if err := writeUnknownStruct(w, ext.enc); err != nil {
return err
}
continue
}
pb, err := GetExtension(ep, desc)
if err != nil {
return fmt.Errorf("failed getting extension: %v", err)
}
// Repeated extensions will appear as a slice.
if !desc.repeated() {
if err := tm.writeExtension(w, desc.Name, pb); err != nil {
return err
}
} else {
v := reflect.ValueOf(pb)
for i := 0; i < v.Len(); i++ {
if err := tm.writeExtension(w, desc.Name, v.Index(i).Interface()); err != nil {
return err
}
}
}
}
return nil
}
func (tm *TextMarshaler) writeExtension(w *textWriter, name string, pb interface{}) error {
if _, err := fmt.Fprintf(w, "[%s]:", name); err != nil {
return err
}
if !w.compact {
if err := w.WriteByte(' '); err != nil {
return err
}
}
if err := tm.writeAny(w, reflect.ValueOf(pb), nil); err != nil {
return err
}
if err := w.WriteByte('\n'); err != nil {
return err
}
return nil
}
func (w *textWriter) writeIndent() {
if !w.complete {
return
}
remain := w.ind * 2
for remain > 0 {
n := remain
if n > len(spaces) {
n = len(spaces)
}
w.w.Write(spaces[:n])
remain -= n
}
w.complete = false
}
// TextMarshaler is a configurable text format marshaler.
type TextMarshaler struct {
Compact bool // use compact text format (one line).
ExpandAny bool // expand google.protobuf.Any messages of known types
}
// Marshal writes a given protocol buffer in text format.
// The only errors returned are from w.
func (tm *TextMarshaler) Marshal(w io.Writer, pb Message) error {
val := reflect.ValueOf(pb)
if pb == nil || val.IsNil() {
w.Write([]byte("<nil>"))
return nil
}
var bw *bufio.Writer
ww, ok := w.(writer)
if !ok {
bw = bufio.NewWriter(w)
ww = bw
}
aw := &textWriter{
w: ww,
complete: true,
compact: tm.Compact,
}
if etm, ok := pb.(encoding.TextMarshaler); ok {
text, err := etm.MarshalText()
if err != nil {
return err
}
if _, err = aw.Write(text); err != nil {
return err
}
if bw != nil {
return bw.Flush()
}
return nil
}
// Dereference the received pointer so we don't have outer < and >.
v := reflect.Indirect(val)
if err := tm.writeStruct(aw, v); err != nil {
return err
}
if bw != nil {
return bw.Flush()
}
return nil
}
// Text is the same as Marshal, but returns the string directly.
func (tm *TextMarshaler) Text(pb Message) string {
var buf bytes.Buffer
tm.Marshal(&buf, pb)
return buf.String()
}
var (
defaultTextMarshaler = TextMarshaler{}
compactTextMarshaler = TextMarshaler{Compact: true}
)
// TODO: consider removing some of the Marshal functions below.
// MarshalText writes a given protocol buffer in text format.
// The only errors returned are from w.
func MarshalText(w io.Writer, pb Message) error { return defaultTextMarshaler.Marshal(w, pb) }
// MarshalTextString is the same as MarshalText, but returns the string directly.
func MarshalTextString(pb Message) string { return defaultTextMarshaler.Text(pb) }
// CompactText writes a given protocol buffer in compact text format (one line).
func CompactText(w io.Writer, pb Message) error { return compactTextMarshaler.Marshal(w, pb) }
// CompactTextString is the same as CompactText, but returns the string directly.
func CompactTextString(pb Message) string { return compactTextMarshaler.Text(pb) }
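For orientation while reading the marshaler above, a minimal usage sketch follows. It is not part of the vendored file; it assumes the generated testdata package exercised by the tests later in this diff (github.com/golang/protobuf/proto/testdata) and uses only the entry points defined above (MarshalText, CompactTextString, TextMarshaler.Text).

package main

import (
	"fmt"
	"os"

	"github.com/golang/protobuf/proto"
	pb "github.com/golang/protobuf/proto/testdata"
)

func main() {
	// Build a small message with the generated testdata type.
	msg := &pb.MyMessage{
		Count: proto.Int32(42),
		Name:  proto.String("Dave"),
	}

	// Multi-line text form, written to any io.Writer.
	if err := proto.MarshalText(os.Stdout, msg); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}

	// Compact, one-line form returned as a string.
	fmt.Println(proto.CompactTextString(msg))

	// The same output via an explicitly configured marshaler.
	tm := &proto.TextMarshaler{Compact: true}
	fmt.Println(tm.Text(msg))
}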

895
vendor/github.com/golang/protobuf/proto/text_parser.go generated vendored Normal file
@ -0,0 +1,895 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto
// Functions for parsing the Text protocol buffer format.
// TODO: message sets.
import (
"encoding"
"errors"
"fmt"
"reflect"
"strconv"
"strings"
"unicode/utf8"
)
// Error string emitted when deserializing Any and fields are already set
const anyRepeatedlyUnpacked = "Any message unpacked multiple times, or %q already set"
type ParseError struct {
Message string
Line int // 1-based line number
Offset int // 0-based byte offset from start of input
}
func (p *ParseError) Error() string {
if p.Line == 1 {
// show offset only for first line
return fmt.Sprintf("line 1.%d: %v", p.Offset, p.Message)
}
return fmt.Sprintf("line %d: %v", p.Line, p.Message)
}
type token struct {
value string
err *ParseError
line int // line number
offset int // byte number from start of input, not start of line
unquoted string // the unquoted version of value, if it was a quoted string
}
func (t *token) String() string {
if t.err == nil {
return fmt.Sprintf("%q (line=%d, offset=%d)", t.value, t.line, t.offset)
}
return fmt.Sprintf("parse error: %v", t.err)
}
type textParser struct {
s string // remaining input
done bool // whether the parsing is finished (success or error)
backed bool // whether back() was called
offset, line int
cur token
}
func newTextParser(s string) *textParser {
p := new(textParser)
p.s = s
p.line = 1
p.cur.line = 1
return p
}
func (p *textParser) errorf(format string, a ...interface{}) *ParseError {
pe := &ParseError{fmt.Sprintf(format, a...), p.cur.line, p.cur.offset}
p.cur.err = pe
p.done = true
return pe
}
// Numbers and identifiers are matched by [-+._A-Za-z0-9]
func isIdentOrNumberChar(c byte) bool {
switch {
case 'A' <= c && c <= 'Z', 'a' <= c && c <= 'z':
return true
case '0' <= c && c <= '9':
return true
}
switch c {
case '-', '+', '.', '_':
return true
}
return false
}
func isWhitespace(c byte) bool {
switch c {
case ' ', '\t', '\n', '\r':
return true
}
return false
}
func isQuote(c byte) bool {
switch c {
case '"', '\'':
return true
}
return false
}
func (p *textParser) skipWhitespace() {
i := 0
for i < len(p.s) && (isWhitespace(p.s[i]) || p.s[i] == '#') {
if p.s[i] == '#' {
// comment; skip to end of line or input
for i < len(p.s) && p.s[i] != '\n' {
i++
}
if i == len(p.s) {
break
}
}
if p.s[i] == '\n' {
p.line++
}
i++
}
p.offset += i
p.s = p.s[i:len(p.s)]
if len(p.s) == 0 {
p.done = true
}
}
func (p *textParser) advance() {
// Skip whitespace
p.skipWhitespace()
if p.done {
return
}
// Start of non-whitespace
p.cur.err = nil
p.cur.offset, p.cur.line = p.offset, p.line
p.cur.unquoted = ""
switch p.s[0] {
case '<', '>', '{', '}', ':', '[', ']', ';', ',', '/':
// Single symbol
p.cur.value, p.s = p.s[0:1], p.s[1:len(p.s)]
case '"', '\'':
// Quoted string
i := 1
for i < len(p.s) && p.s[i] != p.s[0] && p.s[i] != '\n' {
if p.s[i] == '\\' && i+1 < len(p.s) {
// skip escaped char
i++
}
i++
}
if i >= len(p.s) || p.s[i] != p.s[0] {
p.errorf("unmatched quote")
return
}
unq, err := unquoteC(p.s[1:i], rune(p.s[0]))
if err != nil {
p.errorf("invalid quoted string %s: %v", p.s[0:i+1], err)
return
}
p.cur.value, p.s = p.s[0:i+1], p.s[i+1:len(p.s)]
p.cur.unquoted = unq
default:
i := 0
for i < len(p.s) && isIdentOrNumberChar(p.s[i]) {
i++
}
if i == 0 {
p.errorf("unexpected byte %#x", p.s[0])
return
}
p.cur.value, p.s = p.s[0:i], p.s[i:len(p.s)]
}
p.offset += len(p.cur.value)
}
var (
errBadUTF8 = errors.New("proto: bad UTF-8")
errBadHex = errors.New("proto: bad hexadecimal")
)
func unquoteC(s string, quote rune) (string, error) {
// This is based on C++'s tokenizer.cc.
// Despite its name, this is *not* parsing C syntax.
// For instance, "\0" is an invalid quoted string.
// Avoid allocation in trivial cases.
simple := true
for _, r := range s {
if r == '\\' || r == quote {
simple = false
break
}
}
if simple {
return s, nil
}
buf := make([]byte, 0, 3*len(s)/2)
for len(s) > 0 {
r, n := utf8.DecodeRuneInString(s)
if r == utf8.RuneError && n == 1 {
return "", errBadUTF8
}
s = s[n:]
if r != '\\' {
if r < utf8.RuneSelf {
buf = append(buf, byte(r))
} else {
buf = append(buf, string(r)...)
}
continue
}
ch, tail, err := unescape(s)
if err != nil {
return "", err
}
buf = append(buf, ch...)
s = tail
}
return string(buf), nil
}
func unescape(s string) (ch string, tail string, err error) {
r, n := utf8.DecodeRuneInString(s)
if r == utf8.RuneError && n == 1 {
return "", "", errBadUTF8
}
s = s[n:]
switch r {
case 'a':
return "\a", s, nil
case 'b':
return "\b", s, nil
case 'f':
return "\f", s, nil
case 'n':
return "\n", s, nil
case 'r':
return "\r", s, nil
case 't':
return "\t", s, nil
case 'v':
return "\v", s, nil
case '?':
return "?", s, nil // trigraph workaround
case '\'', '"', '\\':
return string(r), s, nil
case '0', '1', '2', '3', '4', '5', '6', '7', 'x', 'X':
if len(s) < 2 {
return "", "", fmt.Errorf(`\%c requires 2 following digits`, r)
}
base := 8
ss := s[:2]
s = s[2:]
if r == 'x' || r == 'X' {
base = 16
} else {
ss = string(r) + ss
}
i, err := strconv.ParseUint(ss, base, 8)
if err != nil {
return "", "", err
}
return string([]byte{byte(i)}), s, nil
case 'u', 'U':
n := 4
if r == 'U' {
n = 8
}
if len(s) < n {
return "", "", fmt.Errorf(`\%c requires %d digits`, r, n)
}
bs := make([]byte, n/2)
for i := 0; i < n; i += 2 {
a, ok1 := unhex(s[i])
b, ok2 := unhex(s[i+1])
if !ok1 || !ok2 {
return "", "", errBadHex
}
bs[i/2] = a<<4 | b
}
s = s[n:]
return string(bs), s, nil
}
return "", "", fmt.Errorf(`unknown escape \%c`, r)
}
// Adapted from src/pkg/strconv/quote.go.
func unhex(b byte) (v byte, ok bool) {
switch {
case '0' <= b && b <= '9':
return b - '0', true
case 'a' <= b && b <= 'f':
return b - 'a' + 10, true
case 'A' <= b && b <= 'F':
return b - 'A' + 10, true
}
return 0, false
}
// Back off the parser by one token. Can only be done between calls to next().
// It makes the next advance() a no-op.
func (p *textParser) back() { p.backed = true }
// Advances the parser and returns the new current token.
func (p *textParser) next() *token {
if p.backed || p.done {
p.backed = false
return &p.cur
}
p.advance()
if p.done {
p.cur.value = ""
} else if len(p.cur.value) > 0 && isQuote(p.cur.value[0]) {
// Look for multiple quoted strings separated by whitespace,
// and concatenate them.
cat := p.cur
for {
p.skipWhitespace()
if p.done || !isQuote(p.s[0]) {
break
}
p.advance()
if p.cur.err != nil {
return &p.cur
}
cat.value += " " + p.cur.value
cat.unquoted += p.cur.unquoted
}
p.done = false // parser may have seen EOF, but we want to return cat
p.cur = cat
}
return &p.cur
}
func (p *textParser) consumeToken(s string) error {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != s {
p.back()
return p.errorf("expected %q, found %q", s, tok.value)
}
return nil
}
// Return a RequiredNotSetError indicating which required field was not set.
func (p *textParser) missingRequiredFieldError(sv reflect.Value) *RequiredNotSetError {
st := sv.Type()
sprops := GetProperties(st)
for i := 0; i < st.NumField(); i++ {
if !isNil(sv.Field(i)) {
continue
}
props := sprops.Prop[i]
if props.Required {
return &RequiredNotSetError{fmt.Sprintf("%v.%v", st, props.OrigName)}
}
}
return &RequiredNotSetError{fmt.Sprintf("%v.<unknown field name>", st)} // should not happen
}
// Returns the index in the struct for the named field, as well as the parsed tag properties.
func structFieldByName(sprops *StructProperties, name string) (int, *Properties, bool) {
i, ok := sprops.decoderOrigNames[name]
if ok {
return i, sprops.Prop[i], true
}
return -1, nil, false
}
// Consume a ':' from the input stream (if the next token is a colon),
// returning an error if a colon is needed but not present.
func (p *textParser) checkForColon(props *Properties, typ reflect.Type) *ParseError {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != ":" {
// Colon is optional when the field is a group or message.
needColon := true
switch props.Wire {
case "group":
needColon = false
case "bytes":
// A "bytes" field is either a message, a string, or a repeated field;
// those three become *T, *string and []T respectively, so we can check for
// this field being a pointer to a non-string.
if typ.Kind() == reflect.Ptr {
// *T or *string
if typ.Elem().Kind() == reflect.String {
break
}
} else if typ.Kind() == reflect.Slice {
// []T or []*T
if typ.Elem().Kind() != reflect.Ptr {
break
}
} else if typ.Kind() == reflect.String {
// The proto3 exception is for a string field,
// which requires a colon.
break
}
needColon = false
}
if needColon {
return p.errorf("expected ':', found %q", tok.value)
}
p.back()
}
return nil
}
func (p *textParser) readStruct(sv reflect.Value, terminator string) error {
st := sv.Type()
sprops := GetProperties(st)
reqCount := sprops.reqCount
var reqFieldErr error
fieldSet := make(map[string]bool)
// A struct is a sequence of "name: value", terminated by one of
// '>' or '}', or the end of the input. A name may also be
// "[extension]" or "[type/url]".
//
// The whole struct can also be an expanded Any message, like:
// [type/url] < ... struct contents ... >
for {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value == terminator {
break
}
if tok.value == "[" {
// Looks like an extension or an Any.
//
// TODO: Check whether we need to handle
// namespace rooted names (e.g. ".something.Foo").
extName, err := p.consumeExtName()
if err != nil {
return err
}
if s := strings.LastIndex(extName, "/"); s >= 0 {
// If it contains a slash, it's an Any type URL.
messageName := extName[s+1:]
mt := MessageType(messageName)
if mt == nil {
return p.errorf("unrecognized message %q in google.protobuf.Any", messageName)
}
tok = p.next()
if tok.err != nil {
return tok.err
}
// consume an optional colon
if tok.value == ":" {
tok = p.next()
if tok.err != nil {
return tok.err
}
}
var terminator string
switch tok.value {
case "<":
terminator = ">"
case "{":
terminator = "}"
default:
return p.errorf("expected '{' or '<', found %q", tok.value)
}
v := reflect.New(mt.Elem())
if pe := p.readStruct(v.Elem(), terminator); pe != nil {
return pe
}
b, err := Marshal(v.Interface().(Message))
if err != nil {
return p.errorf("failed to marshal message of type %q: %v", messageName, err)
}
if fieldSet["type_url"] {
return p.errorf(anyRepeatedlyUnpacked, "type_url")
}
if fieldSet["value"] {
return p.errorf(anyRepeatedlyUnpacked, "value")
}
sv.FieldByName("TypeUrl").SetString(extName)
sv.FieldByName("Value").SetBytes(b)
fieldSet["type_url"] = true
fieldSet["value"] = true
continue
}
var desc *ExtensionDesc
// This could be faster, but it's functional.
// TODO: Do something smarter than a linear scan.
for _, d := range RegisteredExtensions(reflect.New(st).Interface().(Message)) {
if d.Name == extName {
desc = d
break
}
}
if desc == nil {
return p.errorf("unrecognized extension %q", extName)
}
props := &Properties{}
props.Parse(desc.Tag)
typ := reflect.TypeOf(desc.ExtensionType)
if err := p.checkForColon(props, typ); err != nil {
return err
}
rep := desc.repeated()
// Read the extension structure, and set it in
// the value we're constructing.
var ext reflect.Value
if !rep {
ext = reflect.New(typ).Elem()
} else {
ext = reflect.New(typ.Elem()).Elem()
}
if err := p.readAny(ext, props); err != nil {
if _, ok := err.(*RequiredNotSetError); !ok {
return err
}
reqFieldErr = err
}
ep := sv.Addr().Interface().(Message)
if !rep {
SetExtension(ep, desc, ext.Interface())
} else {
old, err := GetExtension(ep, desc)
var sl reflect.Value
if err == nil {
sl = reflect.ValueOf(old) // existing slice
} else {
sl = reflect.MakeSlice(typ, 0, 1)
}
sl = reflect.Append(sl, ext)
SetExtension(ep, desc, sl.Interface())
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
continue
}
// This is a normal, non-extension field.
name := tok.value
var dst reflect.Value
fi, props, ok := structFieldByName(sprops, name)
if ok {
dst = sv.Field(fi)
} else if oop, ok := sprops.OneofTypes[name]; ok {
// It is a oneof.
props = oop.Prop
nv := reflect.New(oop.Type.Elem())
dst = nv.Elem().Field(0)
field := sv.Field(oop.Field)
if !field.IsNil() {
return p.errorf("field '%s' would overwrite already parsed oneof '%s'", name, sv.Type().Field(oop.Field).Name)
}
field.Set(nv)
}
if !dst.IsValid() {
return p.errorf("unknown field name %q in %v", name, st)
}
if dst.Kind() == reflect.Map {
// Consume any colon.
if err := p.checkForColon(props, dst.Type()); err != nil {
return err
}
// Construct the map if it doesn't already exist.
if dst.IsNil() {
dst.Set(reflect.MakeMap(dst.Type()))
}
key := reflect.New(dst.Type().Key()).Elem()
val := reflect.New(dst.Type().Elem()).Elem()
// The map entry should be this sequence of tokens:
// < key : KEY value : VALUE >
// However, implementations may omit key or value, and technically
// we should support them in any order. See b/28924776 for a time
// this went wrong.
tok := p.next()
var terminator string
switch tok.value {
case "<":
terminator = ">"
case "{":
terminator = "}"
default:
return p.errorf("expected '{' or '<', found %q", tok.value)
}
for {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value == terminator {
break
}
switch tok.value {
case "key":
if err := p.consumeToken(":"); err != nil {
return err
}
if err := p.readAny(key, props.mkeyprop); err != nil {
return err
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
case "value":
if err := p.checkForColon(props.mvalprop, dst.Type().Elem()); err != nil {
return err
}
if err := p.readAny(val, props.mvalprop); err != nil {
return err
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
default:
p.back()
return p.errorf(`expected "key", "value", or %q, found %q`, terminator, tok.value)
}
}
dst.SetMapIndex(key, val)
continue
}
// Check that it's not already set if it's not a repeated field.
if !props.Repeated && fieldSet[name] {
return p.errorf("non-repeated field %q was repeated", name)
}
if err := p.checkForColon(props, dst.Type()); err != nil {
return err
}
// Parse into the field.
fieldSet[name] = true
if err := p.readAny(dst, props); err != nil {
if _, ok := err.(*RequiredNotSetError); !ok {
return err
}
reqFieldErr = err
}
if props.Required {
reqCount--
}
if err := p.consumeOptionalSeparator(); err != nil {
return err
}
}
if reqCount > 0 {
return p.missingRequiredFieldError(sv)
}
return reqFieldErr
}
// consumeExtName consumes extension name or expanded Any type URL and the
// following ']'. It returns the name or URL consumed.
func (p *textParser) consumeExtName() (string, error) {
tok := p.next()
if tok.err != nil {
return "", tok.err
}
// If extension name or type url is quoted, it's a single token.
if len(tok.value) > 2 && isQuote(tok.value[0]) && tok.value[len(tok.value)-1] == tok.value[0] {
name, err := unquoteC(tok.value[1:len(tok.value)-1], rune(tok.value[0]))
if err != nil {
return "", err
}
return name, p.consumeToken("]")
}
// Consume everything up to "]"
var parts []string
for tok.value != "]" {
parts = append(parts, tok.value)
tok = p.next()
if tok.err != nil {
return "", p.errorf("unrecognized type_url or extension name: %s", tok.err)
}
}
return strings.Join(parts, ""), nil
}
// consumeOptionalSeparator consumes an optional semicolon or comma.
// It is used in readStruct to provide backward compatibility.
func (p *textParser) consumeOptionalSeparator() error {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value != ";" && tok.value != "," {
p.back()
}
return nil
}
func (p *textParser) readAny(v reflect.Value, props *Properties) error {
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value == "" {
return p.errorf("unexpected EOF")
}
switch fv := v; fv.Kind() {
case reflect.Slice:
at := v.Type()
if at.Elem().Kind() == reflect.Uint8 {
// Special case for []byte
if tok.value[0] != '"' && tok.value[0] != '\'' {
// Deliberately written out here, as the error after
// this switch statement would write "invalid []byte: ...",
// which is not as user-friendly.
return p.errorf("invalid string: %v", tok.value)
}
bytes := []byte(tok.unquoted)
fv.Set(reflect.ValueOf(bytes))
return nil
}
// Repeated field.
if tok.value == "[" {
// Repeated field with list notation, like [1,2,3].
for {
fv.Set(reflect.Append(fv, reflect.New(at.Elem()).Elem()))
err := p.readAny(fv.Index(fv.Len()-1), props)
if err != nil {
return err
}
tok := p.next()
if tok.err != nil {
return tok.err
}
if tok.value == "]" {
break
}
if tok.value != "," {
return p.errorf("Expected ']' or ',' found %q", tok.value)
}
}
return nil
}
// One value of the repeated field.
p.back()
fv.Set(reflect.Append(fv, reflect.New(at.Elem()).Elem()))
return p.readAny(fv.Index(fv.Len()-1), props)
case reflect.Bool:
// true/1/t/True or false/f/0/False.
switch tok.value {
case "true", "1", "t", "True":
fv.SetBool(true)
return nil
case "false", "0", "f", "False":
fv.SetBool(false)
return nil
}
case reflect.Float32, reflect.Float64:
v := tok.value
// Ignore 'f' for compatibility with output generated by C++, but don't
// remove 'f' when the value is "-inf" or "inf".
if strings.HasSuffix(v, "f") && tok.value != "-inf" && tok.value != "inf" {
v = v[:len(v)-1]
}
if f, err := strconv.ParseFloat(v, fv.Type().Bits()); err == nil {
fv.SetFloat(f)
return nil
}
case reflect.Int32:
if x, err := strconv.ParseInt(tok.value, 0, 32); err == nil {
fv.SetInt(x)
return nil
}
if len(props.Enum) == 0 {
break
}
m, ok := enumValueMaps[props.Enum]
if !ok {
break
}
x, ok := m[tok.value]
if !ok {
break
}
fv.SetInt(int64(x))
return nil
case reflect.Int64:
if x, err := strconv.ParseInt(tok.value, 0, 64); err == nil {
fv.SetInt(x)
return nil
}
case reflect.Ptr:
// A basic field (indirected through pointer), or a repeated message/group
p.back()
fv.Set(reflect.New(fv.Type().Elem()))
return p.readAny(fv.Elem(), props)
case reflect.String:
if tok.value[0] == '"' || tok.value[0] == '\'' {
fv.SetString(tok.unquoted)
return nil
}
case reflect.Struct:
var terminator string
switch tok.value {
case "{":
terminator = "}"
case "<":
terminator = ">"
default:
return p.errorf("expected '{' or '<', found %q", tok.value)
}
// TODO: Handle nested messages which implement encoding.TextUnmarshaler.
return p.readStruct(fv, terminator)
case reflect.Uint32:
if x, err := strconv.ParseUint(tok.value, 0, 32); err == nil {
fv.SetUint(x)
return nil
}
case reflect.Uint64:
if x, err := strconv.ParseUint(tok.value, 0, 64); err == nil {
fv.SetUint(x)
return nil
}
}
return p.errorf("invalid %v: %v", v.Type(), tok.value)
}
// UnmarshalText reads a protocol buffer in Text format. UnmarshalText resets pb
// before starting to unmarshal, so any existing data in pb is always removed.
// If a required field is not set and no other error occurs,
// UnmarshalText returns *RequiredNotSetError.
func UnmarshalText(s string, pb Message) error {
if um, ok := pb.(encoding.TextUnmarshaler); ok {
err := um.UnmarshalText([]byte(s))
return err
}
pb.Reset()
v := reflect.ValueOf(pb)
if pe := newTextParser(s).readStruct(v.Elem(), ""); pe != nil {
return pe
}
return nil
}
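A short sketch of how the parser above is typically driven, again assuming the generated pb.MyMessage type from the testdata package used by the tests below. It exercises the '#' comment handling in skipWhitespace and the *RequiredNotSetError path produced by readStruct.

package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	pb "github.com/golang/protobuf/proto/testdata"
)

func main() {
	// Text-format input: '#' starts a comment, string fields accept
	// single or double quotes, and repeated fields may simply repeat.
	in := `count: 42  # required field
name: "Dave"
pet: "bunny" pet: 'kitty'`

	msg := new(pb.MyMessage)
	if err := proto.UnmarshalText(in, msg); err != nil {
		// Parsing succeeded but a required field was left unset.
		if _, ok := err.(*proto.RequiredNotSetError); ok {
			fmt.Println("missing required field:", err)
			return
		}
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println(proto.CompactTextString(msg))
}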

@ -0,0 +1,673 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto_test
import (
"math"
"reflect"
"testing"
. "github.com/golang/protobuf/proto"
proto3pb "github.com/golang/protobuf/proto/proto3_proto"
. "github.com/golang/protobuf/proto/testdata"
)
type UnmarshalTextTest struct {
in string
err string // if "", no error expected
out *MyMessage
}
func buildExtStructTest(text string) UnmarshalTextTest {
msg := &MyMessage{
Count: Int32(42),
}
SetExtension(msg, E_Ext_More, &Ext{
Data: String("Hello, world!"),
})
return UnmarshalTextTest{in: text, out: msg}
}
func buildExtDataTest(text string) UnmarshalTextTest {
msg := &MyMessage{
Count: Int32(42),
}
SetExtension(msg, E_Ext_Text, String("Hello, world!"))
SetExtension(msg, E_Ext_Number, Int32(1729))
return UnmarshalTextTest{in: text, out: msg}
}
func buildExtRepStringTest(text string) UnmarshalTextTest {
msg := &MyMessage{
Count: Int32(42),
}
if err := SetExtension(msg, E_Greeting, []string{"bula", "hola"}); err != nil {
panic(err)
}
return UnmarshalTextTest{in: text, out: msg}
}
var unMarshalTextTests = []UnmarshalTextTest{
// Basic
{
in: " count:42\n name:\"Dave\" ",
out: &MyMessage{
Count: Int32(42),
Name: String("Dave"),
},
},
// Empty quoted string
{
in: `count:42 name:""`,
out: &MyMessage{
Count: Int32(42),
Name: String(""),
},
},
// Quoted string concatenation with double quotes
{
in: `count:42 name: "My name is "` + "\n" + `"elsewhere"`,
out: &MyMessage{
Count: Int32(42),
Name: String("My name is elsewhere"),
},
},
// Quoted string concatenation with single quotes
{
in: "count:42 name: 'My name is '\n'elsewhere'",
out: &MyMessage{
Count: Int32(42),
Name: String("My name is elsewhere"),
},
},
// Quoted string concatenations with mixed quotes
{
in: "count:42 name: 'My name is '\n\"elsewhere\"",
out: &MyMessage{
Count: Int32(42),
Name: String("My name is elsewhere"),
},
},
{
in: "count:42 name: \"My name is \"\n'elsewhere'",
out: &MyMessage{
Count: Int32(42),
Name: String("My name is elsewhere"),
},
},
// Quoted string with escaped apostrophe
{
in: `count:42 name: "HOLIDAY - New Year\'s Day"`,
out: &MyMessage{
Count: Int32(42),
Name: String("HOLIDAY - New Year's Day"),
},
},
// Quoted string with single quote
{
in: `count:42 name: 'Roger "The Ramster" Ramjet'`,
out: &MyMessage{
Count: Int32(42),
Name: String(`Roger "The Ramster" Ramjet`),
},
},
// Quoted string with all the accepted special characters from the C++ test
{
in: `count:42 name: ` + "\"\\\"A string with \\' characters \\n and \\r newlines and \\t tabs and \\001 slashes \\\\ and multiple spaces\"",
out: &MyMessage{
Count: Int32(42),
Name: String("\"A string with ' characters \n and \r newlines and \t tabs and \001 slashes \\ and multiple spaces"),
},
},
// Quoted string with quoted backslash
{
in: `count:42 name: "\\'xyz"`,
out: &MyMessage{
Count: Int32(42),
Name: String(`\'xyz`),
},
},
// Quoted string with UTF-8 bytes.
{
in: "count:42 name: '\303\277\302\201\xAB'",
out: &MyMessage{
Count: Int32(42),
Name: String("\303\277\302\201\xAB"),
},
},
// Bad quoted string
{
in: `inner: < host: "\0" >` + "\n",
err: `line 1.15: invalid quoted string "\0": \0 requires 2 following digits`,
},
// Number too large for int64
{
in: "count: 1 others { key: 123456789012345678901 }",
err: "line 1.23: invalid int64: 123456789012345678901",
},
// Number too large for int32
{
in: "count: 1234567890123",
err: "line 1.7: invalid int32: 1234567890123",
},
// Number in hexadecimal
{
in: "count: 0x2beef",
out: &MyMessage{
Count: Int32(0x2beef),
},
},
// Number in octal
{
in: "count: 024601",
out: &MyMessage{
Count: Int32(024601),
},
},
// Floating point number with "f" suffix
{
in: "count: 4 others:< weight: 17.0f >",
out: &MyMessage{
Count: Int32(4),
Others: []*OtherMessage{
{
Weight: Float32(17),
},
},
},
},
// Floating point positive infinity
{
in: "count: 4 bigfloat: inf",
out: &MyMessage{
Count: Int32(4),
Bigfloat: Float64(math.Inf(1)),
},
},
// Floating point negative infinity
{
in: "count: 4 bigfloat: -inf",
out: &MyMessage{
Count: Int32(4),
Bigfloat: Float64(math.Inf(-1)),
},
},
// Number too large for float32
{
in: "others:< weight: 12345678901234567890123456789012345678901234567890 >",
err: "line 1.17: invalid float32: 12345678901234567890123456789012345678901234567890",
},
// Number posing as a quoted string
{
in: `inner: < host: 12 >` + "\n",
err: `line 1.15: invalid string: 12`,
},
// Quoted string posing as int32
{
in: `count: "12"`,
err: `line 1.7: invalid int32: "12"`,
},
// Quoted string posing a float32
{
in: `others:< weight: "17.4" >`,
err: `line 1.17: invalid float32: "17.4"`,
},
// Enum
{
in: `count:42 bikeshed: BLUE`,
out: &MyMessage{
Count: Int32(42),
Bikeshed: MyMessage_BLUE.Enum(),
},
},
// Repeated field
{
in: `count:42 pet: "horsey" pet:"bunny"`,
out: &MyMessage{
Count: Int32(42),
Pet: []string{"horsey", "bunny"},
},
},
// Repeated field with list notation
{
in: `count:42 pet: ["horsey", "bunny"]`,
out: &MyMessage{
Count: Int32(42),
Pet: []string{"horsey", "bunny"},
},
},
// Repeated message with/without colon and <>/{}
{
in: `count:42 others:{} others{} others:<> others:{}`,
out: &MyMessage{
Count: Int32(42),
Others: []*OtherMessage{
{},
{},
{},
{},
},
},
},
// Missing colon for inner message
{
in: `count:42 inner < host: "cauchy.syd" >`,
out: &MyMessage{
Count: Int32(42),
Inner: &InnerMessage{
Host: String("cauchy.syd"),
},
},
},
// Missing colon for string field
{
in: `name "Dave"`,
err: `line 1.5: expected ':', found "\"Dave\""`,
},
// Missing colon for int32 field
{
in: `count 42`,
err: `line 1.6: expected ':', found "42"`,
},
// Missing required field
{
in: `name: "Pawel"`,
err: `proto: required field "testdata.MyMessage.count" not set`,
out: &MyMessage{
Name: String("Pawel"),
},
},
// Missing required field in a required submessage
{
in: `count: 42 we_must_go_deeper < leo_finally_won_an_oscar <> >`,
err: `proto: required field "testdata.InnerMessage.host" not set`,
out: &MyMessage{
Count: Int32(42),
WeMustGoDeeper: &RequiredInnerMessage{LeoFinallyWonAnOscar: &InnerMessage{}},
},
},
// Repeated non-repeated field
{
in: `name: "Rob" name: "Russ"`,
err: `line 1.12: non-repeated field "name" was repeated`,
},
// Group
{
in: `count: 17 SomeGroup { group_field: 12 }`,
out: &MyMessage{
Count: Int32(17),
Somegroup: &MyMessage_SomeGroup{
GroupField: Int32(12),
},
},
},
// Semicolon between fields
{
in: `count:3;name:"Calvin"`,
out: &MyMessage{
Count: Int32(3),
Name: String("Calvin"),
},
},
// Comma between fields
{
in: `count:4,name:"Ezekiel"`,
out: &MyMessage{
Count: Int32(4),
Name: String("Ezekiel"),
},
},
// Boolean false
{
in: `count:42 inner { host: "example.com" connected: false }`,
out: &MyMessage{
Count: Int32(42),
Inner: &InnerMessage{
Host: String("example.com"),
Connected: Bool(false),
},
},
},
// Boolean true
{
in: `count:42 inner { host: "example.com" connected: true }`,
out: &MyMessage{
Count: Int32(42),
Inner: &InnerMessage{
Host: String("example.com"),
Connected: Bool(true),
},
},
},
// Boolean 0
{
in: `count:42 inner { host: "example.com" connected: 0 }`,
out: &MyMessage{
Count: Int32(42),
Inner: &InnerMessage{
Host: String("example.com"),
Connected: Bool(false),
},
},
},
// Boolean 1
{
in: `count:42 inner { host: "example.com" connected: 1 }`,
out: &MyMessage{
Count: Int32(42),
Inner: &InnerMessage{
Host: String("example.com"),
Connected: Bool(true),
},
},
},
// Boolean f
{
in: `count:42 inner { host: "example.com" connected: f }`,
out: &MyMessage{
Count: Int32(42),
Inner: &InnerMessage{
Host: String("example.com"),
Connected: Bool(false),
},
},
},
// Boolean t
{
in: `count:42 inner { host: "example.com" connected: t }`,
out: &MyMessage{
Count: Int32(42),
Inner: &InnerMessage{
Host: String("example.com"),
Connected: Bool(true),
},
},
},
// Boolean False
{
in: `count:42 inner { host: "example.com" connected: False }`,
out: &MyMessage{
Count: Int32(42),
Inner: &InnerMessage{
Host: String("example.com"),
Connected: Bool(false),
},
},
},
// Boolean True
{
in: `count:42 inner { host: "example.com" connected: True }`,
out: &MyMessage{
Count: Int32(42),
Inner: &InnerMessage{
Host: String("example.com"),
Connected: Bool(true),
},
},
},
// Extension
buildExtStructTest(`count: 42 [testdata.Ext.more]:<data:"Hello, world!" >`),
buildExtStructTest(`count: 42 [testdata.Ext.more] {data:"Hello, world!"}`),
buildExtDataTest(`count: 42 [testdata.Ext.text]:"Hello, world!" [testdata.Ext.number]:1729`),
buildExtRepStringTest(`count: 42 [testdata.greeting]:"bula" [testdata.greeting]:"hola"`),
// Big all-in-one
{
in: "count:42 # Meaning\n" +
`name:"Dave" ` +
`quote:"\"I didn't want to go.\"" ` +
`pet:"bunny" ` +
`pet:"kitty" ` +
`pet:"horsey" ` +
`inner:<` +
` host:"footrest.syd" ` +
` port:7001 ` +
` connected:true ` +
`> ` +
`others:<` +
` key:3735928559 ` +
` value:"\x01A\a\f" ` +
`> ` +
`others:<` +
" weight:58.9 # Atomic weight of Co\n" +
` inner:<` +
` host:"lesha.mtv" ` +
` port:8002 ` +
` >` +
`>`,
out: &MyMessage{
Count: Int32(42),
Name: String("Dave"),
Quote: String(`"I didn't want to go."`),
Pet: []string{"bunny", "kitty", "horsey"},
Inner: &InnerMessage{
Host: String("footrest.syd"),
Port: Int32(7001),
Connected: Bool(true),
},
Others: []*OtherMessage{
{
Key: Int64(3735928559),
Value: []byte{0x1, 'A', '\a', '\f'},
},
{
Weight: Float32(58.9),
Inner: &InnerMessage{
Host: String("lesha.mtv"),
Port: Int32(8002),
},
},
},
},
},
}
func TestUnmarshalText(t *testing.T) {
for i, test := range unMarshalTextTests {
pb := new(MyMessage)
err := UnmarshalText(test.in, pb)
if test.err == "" {
// We don't expect failure.
if err != nil {
t.Errorf("Test %d: Unexpected error: %v", i, err)
} else if !reflect.DeepEqual(pb, test.out) {
t.Errorf("Test %d: Incorrect populated \nHave: %v\nWant: %v",
i, pb, test.out)
}
} else {
// We do expect failure.
if err == nil {
t.Errorf("Test %d: Didn't get expected error: %v", i, test.err)
} else if err.Error() != test.err {
t.Errorf("Test %d: Incorrect error.\nHave: %v\nWant: %v",
i, err.Error(), test.err)
} else if _, ok := err.(*RequiredNotSetError); ok && test.out != nil && !reflect.DeepEqual(pb, test.out) {
t.Errorf("Test %d: Incorrect populated \nHave: %v\nWant: %v",
i, pb, test.out)
}
}
}
}
func TestUnmarshalTextCustomMessage(t *testing.T) {
msg := &textMessage{}
if err := UnmarshalText("custom", msg); err != nil {
t.Errorf("Unexpected error from custom unmarshal: %v", err)
}
if UnmarshalText("not custom", msg) == nil {
t.Errorf("Didn't get expected error from custom unmarshal")
}
}
// Regression test; this caused a panic.
func TestRepeatedEnum(t *testing.T) {
pb := new(RepeatedEnum)
if err := UnmarshalText("color: RED", pb); err != nil {
t.Fatal(err)
}
exp := &RepeatedEnum{
Color: []RepeatedEnum_Color{RepeatedEnum_RED},
}
if !Equal(pb, exp) {
t.Errorf("Incorrect populated \nHave: %v\nWant: %v", pb, exp)
}
}
func TestProto3TextParsing(t *testing.T) {
m := new(proto3pb.Message)
const in = `name: "Wallace" true_scotsman: true`
want := &proto3pb.Message{
Name: "Wallace",
TrueScotsman: true,
}
if err := UnmarshalText(in, m); err != nil {
t.Fatal(err)
}
if !Equal(m, want) {
t.Errorf("\n got %v\nwant %v", m, want)
}
}
func TestMapParsing(t *testing.T) {
m := new(MessageWithMap)
const in = `name_mapping:<key:1234 value:"Feist"> name_mapping:<key:1 value:"Beatles">` +
`msg_mapping:<key:-4, value:<f: 2.0>,>` + // separating commas are okay
`msg_mapping<key:-2 value<f: 4.0>>` + // no colon after "value"
`msg_mapping:<value:<f: 5.0>>` + // omitted key
`msg_mapping:<key:1>` + // omitted value
`byte_mapping:<key:true value:"so be it">` +
`byte_mapping:<>` // omitted key and value
want := &MessageWithMap{
NameMapping: map[int32]string{
1: "Beatles",
1234: "Feist",
},
MsgMapping: map[int64]*FloatingPoint{
-4: {F: Float64(2.0)},
-2: {F: Float64(4.0)},
0: {F: Float64(5.0)},
1: nil,
},
ByteMapping: map[bool][]byte{
false: nil,
true: []byte("so be it"),
},
}
if err := UnmarshalText(in, m); err != nil {
t.Fatal(err)
}
if !Equal(m, want) {
t.Errorf("\n got %v\nwant %v", m, want)
}
}
func TestOneofParsing(t *testing.T) {
const in = `name:"Shrek"`
m := new(Communique)
want := &Communique{Union: &Communique_Name{"Shrek"}}
if err := UnmarshalText(in, m); err != nil {
t.Fatal(err)
}
if !Equal(m, want) {
t.Errorf("\n got %v\nwant %v", m, want)
}
const inOverwrite = `name:"Shrek" number:42`
m = new(Communique)
testErr := "line 1.13: field 'number' would overwrite already parsed oneof 'Union'"
if err := UnmarshalText(inOverwrite, m); err == nil {
t.Errorf("TestOneofParsing: Didn't get expected error: %v", testErr)
} else if err.Error() != testErr {
t.Errorf("TestOneofParsing: Incorrect error.\nHave: %v\nWant: %v",
err.Error(), testErr)
}
}
var benchInput string
func init() {
benchInput = "count: 4\n"
for i := 0; i < 1000; i++ {
benchInput += "pet: \"fido\"\n"
}
// Check it is valid input.
pb := new(MyMessage)
err := UnmarshalText(benchInput, pb)
if err != nil {
panic("Bad benchmark input: " + err.Error())
}
}
func BenchmarkUnmarshalText(b *testing.B) {
pb := new(MyMessage)
for i := 0; i < b.N; i++ {
UnmarshalText(benchInput, pb)
}
b.SetBytes(int64(len(benchInput)))
}

474
vendor/github.com/golang/protobuf/proto/text_test.go generated vendored Normal file
@ -0,0 +1,474 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package proto_test
import (
"bytes"
"errors"
"io/ioutil"
"math"
"strings"
"testing"
"github.com/golang/protobuf/proto"
proto3pb "github.com/golang/protobuf/proto/proto3_proto"
pb "github.com/golang/protobuf/proto/testdata"
)
// textMessage implements the methods that allow it to marshal and unmarshal
// itself as text.
type textMessage struct {
}
func (*textMessage) MarshalText() ([]byte, error) {
return []byte("custom"), nil
}
func (*textMessage) UnmarshalText(bytes []byte) error {
if string(bytes) != "custom" {
return errors.New("expected 'custom'")
}
return nil
}
func (*textMessage) Reset() {}
func (*textMessage) String() string { return "" }
func (*textMessage) ProtoMessage() {}
func newTestMessage() *pb.MyMessage {
msg := &pb.MyMessage{
Count: proto.Int32(42),
Name: proto.String("Dave"),
Quote: proto.String(`"I didn't want to go."`),
Pet: []string{"bunny", "kitty", "horsey"},
Inner: &pb.InnerMessage{
Host: proto.String("footrest.syd"),
Port: proto.Int32(7001),
Connected: proto.Bool(true),
},
Others: []*pb.OtherMessage{
{
Key: proto.Int64(0xdeadbeef),
Value: []byte{1, 65, 7, 12},
},
{
Weight: proto.Float32(6.022),
Inner: &pb.InnerMessage{
Host: proto.String("lesha.mtv"),
Port: proto.Int32(8002),
},
},
},
Bikeshed: pb.MyMessage_BLUE.Enum(),
Somegroup: &pb.MyMessage_SomeGroup{
GroupField: proto.Int32(8),
},
// One normally wouldn't do this.
// This is an undeclared tag 13, as a varint (wire type 0) with value 4.
XXX_unrecognized: []byte{13<<3 | 0, 4},
}
ext := &pb.Ext{
Data: proto.String("Big gobs for big rats"),
}
if err := proto.SetExtension(msg, pb.E_Ext_More, ext); err != nil {
panic(err)
}
greetings := []string{"adg", "easy", "cow"}
if err := proto.SetExtension(msg, pb.E_Greeting, greetings); err != nil {
panic(err)
}
// Add an unknown extension. We marshal a pb.Ext, and fake the ID.
b, err := proto.Marshal(&pb.Ext{Data: proto.String("3G skiing")})
if err != nil {
panic(err)
}
b = append(proto.EncodeVarint(201<<3|proto.WireBytes), b...)
proto.SetRawExtension(msg, 201, b)
// Extensions can be plain fields, too, so let's test that.
b = append(proto.EncodeVarint(202<<3|proto.WireVarint), 19)
proto.SetRawExtension(msg, 202, b)
return msg
}
const text = `count: 42
name: "Dave"
quote: "\"I didn't want to go.\""
pet: "bunny"
pet: "kitty"
pet: "horsey"
inner: <
host: "footrest.syd"
port: 7001
connected: true
>
others: <
key: 3735928559
value: "\001A\007\014"
>
others: <
weight: 6.022
inner: <
host: "lesha.mtv"
port: 8002
>
>
bikeshed: BLUE
SomeGroup {
group_field: 8
}
/* 2 unknown bytes */
13: 4
[testdata.Ext.more]: <
data: "Big gobs for big rats"
>
[testdata.greeting]: "adg"
[testdata.greeting]: "easy"
[testdata.greeting]: "cow"
/* 13 unknown bytes */
201: "\t3G skiing"
/* 3 unknown bytes */
202: 19
`
func TestMarshalText(t *testing.T) {
buf := new(bytes.Buffer)
if err := proto.MarshalText(buf, newTestMessage()); err != nil {
t.Fatalf("proto.MarshalText: %v", err)
}
s := buf.String()
if s != text {
t.Errorf("Got:\n===\n%v===\nExpected:\n===\n%v===\n", s, text)
}
}
func TestMarshalTextCustomMessage(t *testing.T) {
buf := new(bytes.Buffer)
if err := proto.MarshalText(buf, &textMessage{}); err != nil {
t.Fatalf("proto.MarshalText: %v", err)
}
s := buf.String()
if s != "custom" {
t.Errorf("Got %q, expected %q", s, "custom")
}
}
func TestMarshalTextNil(t *testing.T) {
want := "<nil>"
tests := []proto.Message{nil, (*pb.MyMessage)(nil)}
for i, test := range tests {
buf := new(bytes.Buffer)
if err := proto.MarshalText(buf, test); err != nil {
t.Fatal(err)
}
if got := buf.String(); got != want {
t.Errorf("%d: got %q want %q", i, got, want)
}
}
}
func TestMarshalTextUnknownEnum(t *testing.T) {
// The Color enum only specifies values 0-2.
m := &pb.MyMessage{Bikeshed: pb.MyMessage_Color(3).Enum()}
got := m.String()
const want = `bikeshed:3 `
if got != want {
t.Errorf("\n got %q\nwant %q", got, want)
}
}
func TestTextOneof(t *testing.T) {
tests := []struct {
m proto.Message
want string
}{
// zero message
{&pb.Communique{}, ``},
// scalar field
{&pb.Communique{Union: &pb.Communique_Number{4}}, `number:4`},
// message field
{&pb.Communique{Union: &pb.Communique_Msg{
&pb.Strings{StringField: proto.String("why hello!")},
}}, `msg:<string_field:"why hello!" >`},
// bad oneof (should not panic)
{&pb.Communique{Union: &pb.Communique_Msg{nil}}, `msg:/* nil */`},
}
for _, test := range tests {
got := strings.TrimSpace(test.m.String())
if got != test.want {
t.Errorf("\n got %s\nwant %s", got, test.want)
}
}
}
func BenchmarkMarshalTextBuffered(b *testing.B) {
buf := new(bytes.Buffer)
m := newTestMessage()
for i := 0; i < b.N; i++ {
buf.Reset()
proto.MarshalText(buf, m)
}
}
func BenchmarkMarshalTextUnbuffered(b *testing.B) {
w := ioutil.Discard
m := newTestMessage()
for i := 0; i < b.N; i++ {
proto.MarshalText(w, m)
}
}
func compact(src string) string {
// s/[ \n]+/ /g; s/ $//;
dst := make([]byte, len(src))
space, comment := false, false
j := 0
for i := 0; i < len(src); i++ {
if strings.HasPrefix(src[i:], "/*") {
comment = true
i++
continue
}
if comment && strings.HasPrefix(src[i:], "*/") {
comment = false
i++
continue
}
if comment {
continue
}
c := src[i]
if c == ' ' || c == '\n' {
space = true
continue
}
if j > 0 && (dst[j-1] == ':' || dst[j-1] == '<' || dst[j-1] == '{') {
space = false
}
if c == '{' {
space = false
}
if space {
dst[j] = ' '
j++
space = false
}
dst[j] = c
j++
}
if space {
dst[j] = ' '
j++
}
return string(dst[0:j])
}
var compactText = compact(text)
func TestCompactText(t *testing.T) {
s := proto.CompactTextString(newTestMessage())
if s != compactText {
t.Errorf("Got:\n===\n%v===\nExpected:\n===\n%v\n===\n", s, compactText)
}
}
func TestStringEscaping(t *testing.T) {
testCases := []struct {
in *pb.Strings
out string
}{
{
// Test data from C++ test (TextFormatTest.StringEscape).
// Single divergence: we don't escape apostrophes.
&pb.Strings{StringField: proto.String("\"A string with ' characters \n and \r newlines and \t tabs and \001 slashes \\ and multiple spaces")},
"string_field: \"\\\"A string with ' characters \\n and \\r newlines and \\t tabs and \\001 slashes \\\\ and multiple spaces\"\n",
},
{
// Test data from the same C++ test.
&pb.Strings{StringField: proto.String("\350\260\267\346\255\214")},
"string_field: \"\\350\\260\\267\\346\\255\\214\"\n",
},
{
// Some UTF-8.
&pb.Strings{StringField: proto.String("\x00\x01\xff\x81")},
`string_field: "\000\001\377\201"` + "\n",
},
}
for i, tc := range testCases {
var buf bytes.Buffer
if err := proto.MarshalText(&buf, tc.in); err != nil {
t.Errorf("proto.MarsalText: %v", err)
continue
}
s := buf.String()
if s != tc.out {
t.Errorf("#%d: Got:\n%s\nExpected:\n%s\n", i, s, tc.out)
continue
}
// Check round-trip.
pb := new(pb.Strings)
if err := proto.UnmarshalText(s, pb); err != nil {
t.Errorf("#%d: UnmarshalText: %v", i, err)
continue
}
if !proto.Equal(pb, tc.in) {
t.Errorf("#%d: Round-trip failed:\nstart: %v\n end: %v", i, tc.in, pb)
}
}
}
// A limitedWriter accepts some output before it fails.
// This is a proxy for something like a nearly-full or imminently-failing disk,
// or a network connection that is about to die.
type limitedWriter struct {
b bytes.Buffer
limit int
}
var outOfSpace = errors.New("proto: insufficient space")
func (w *limitedWriter) Write(p []byte) (n int, err error) {
var avail = w.limit - w.b.Len()
if avail <= 0 {
return 0, outOfSpace
}
if len(p) <= avail {
return w.b.Write(p)
}
n, _ = w.b.Write(p[:avail])
return n, outOfSpace
}
func TestMarshalTextFailing(t *testing.T) {
// Try lots of different sizes to exercise more error code-paths.
for lim := 0; lim < len(text); lim++ {
buf := new(limitedWriter)
buf.limit = lim
err := proto.MarshalText(buf, newTestMessage())
// We expect a certain error, but also some partial results in the buffer.
if err != outOfSpace {
t.Errorf("Got:\n===\n%v===\nExpected:\n===\n%v===\n", err, outOfSpace)
}
s := buf.b.String()
x := text[:buf.limit]
if s != x {
t.Errorf("Got:\n===\n%v===\nExpected:\n===\n%v===\n", s, x)
}
}
}
func TestFloats(t *testing.T) {
tests := []struct {
f float64
want string
}{
{0, "0"},
{4.7, "4.7"},
{math.Inf(1), "inf"},
{math.Inf(-1), "-inf"},
{math.NaN(), "nan"},
}
for _, test := range tests {
msg := &pb.FloatingPoint{F: &test.f}
got := strings.TrimSpace(msg.String())
want := `f:` + test.want
if got != want {
t.Errorf("f=%f: got %q, want %q", test.f, got, want)
}
}
}
func TestRepeatedNilText(t *testing.T) {
m := &pb.MessageList{
Message: []*pb.MessageList_Message{
nil,
&pb.MessageList_Message{
Name: proto.String("Horse"),
},
nil,
},
}
want := `Message <nil>
Message {
name: "Horse"
}
Message <nil>
`
if s := proto.MarshalTextString(m); s != want {
t.Errorf(" got: %s\nwant: %s", s, want)
}
}
func TestProto3Text(t *testing.T) {
tests := []struct {
m proto.Message
want string
}{
// zero message
{&proto3pb.Message{}, ``},
// zero message except for an empty byte slice
{&proto3pb.Message{Data: []byte{}}, ``},
// trivial case
{&proto3pb.Message{Name: "Rob", HeightInCm: 175}, `name:"Rob" height_in_cm:175`},
// empty map
{&pb.MessageWithMap{}, ``},
// non-empty map; map format is the same as a repeated struct,
// and they are sorted by key (numerically for numeric keys).
{
&pb.MessageWithMap{NameMapping: map[int32]string{
-1: "Negatory",
7: "Lucky",
1234: "Feist",
6345789: "Otis",
}},
`name_mapping:<key:-1 value:"Negatory" > ` +
`name_mapping:<key:7 value:"Lucky" > ` +
`name_mapping:<key:1234 value:"Feist" > ` +
`name_mapping:<key:6345789 value:"Otis" >`,
},
// map with nil value; not well-defined, but we shouldn't crash
{
&pb.MessageWithMap{MsgMapping: map[int64]*pb.FloatingPoint{7: nil}},
`msg_mapping:<key:7 >`,
},
}
for _, test := range tests {
got := strings.TrimSpace(test.m.String())
if got != test.want {
t.Errorf("\n got %s\nwant %s", got, test.want)
}
}
}
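As a quick, hedged illustration of the two output shapes these tests compare, consider the proto3 test message from TestProto3Text above. The compact line matches the expectation asserted in that test; the expanded form shows the one-field-per-line layout the marshaler is expected to produce for the same message.

package main

import (
	"fmt"

	"github.com/golang/protobuf/proto"
	proto3pb "github.com/golang/protobuf/proto/proto3_proto"
)

func main() {
	m := &proto3pb.Message{Name: "Rob", HeightInCm: 175}

	// Compact, single-line form (as asserted in TestProto3Text):
	//   name:"Rob" height_in_cm:175
	fmt.Println(proto.CompactTextString(m))

	// Expanded, multi-line form, one "field: value" per line:
	//   name: "Rob"
	//   height_in_cm: 175
	fmt.Print(proto.MarshalTextString(m))
}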

@ -0,0 +1,33 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
test:
cd testdata && make test

@ -0,0 +1,37 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Not stored here, but descriptor.proto is in https://github.com/google/protobuf/
# at src/google/protobuf/descriptor.proto
regenerate:
@echo WARNING! THIS RULE IS PROBABLY NOT RIGHT FOR YOUR INSTALLATION
cp $(HOME)/src/protobuf/include/google/protobuf/descriptor.proto .
protoc --go_out=../../../../.. -I$(HOME)/src/protobuf/include $(HOME)/src/protobuf/include/google/protobuf/descriptor.proto

File diff suppressed because it is too large
@ -0,0 +1,849 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
// https://developers.google.com/protocol-buffers/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Author: kenton@google.com (Kenton Varda)
// Based on original Protocol Buffers design by
// Sanjay Ghemawat, Jeff Dean, and others.
//
// The messages in this file describe the definitions found in .proto files.
// A valid .proto file can be translated directly to a FileDescriptorProto
// without any other information (e.g. without reading its imports).
syntax = "proto2";
package google.protobuf;
option go_package = "github.com/golang/protobuf/protoc-gen-go/descriptor;descriptor";
option java_package = "com.google.protobuf";
option java_outer_classname = "DescriptorProtos";
option csharp_namespace = "Google.Protobuf.Reflection";
option objc_class_prefix = "GPB";
// descriptor.proto must be optimized for speed because reflection-based
// algorithms don't work during bootstrapping.
option optimize_for = SPEED;
// The protocol compiler can output a FileDescriptorSet containing the .proto
// files it parses.
message FileDescriptorSet {
repeated FileDescriptorProto file = 1;
}
// Describes a complete .proto file.
message FileDescriptorProto {
optional string name = 1; // file name, relative to root of source tree
optional string package = 2; // e.g. "foo", "foo.bar", etc.
// Names of files imported by this file.
repeated string dependency = 3;
// Indexes of the public imported files in the dependency list above.
repeated int32 public_dependency = 10;
// Indexes of the weak imported files in the dependency list.
// For Google-internal migration only. Do not use.
repeated int32 weak_dependency = 11;
// All top-level definitions in this file.
repeated DescriptorProto message_type = 4;
repeated EnumDescriptorProto enum_type = 5;
repeated ServiceDescriptorProto service = 6;
repeated FieldDescriptorProto extension = 7;
optional FileOptions options = 8;
// This field contains optional information about the original source code.
// You may safely remove this entire field without harming runtime
// functionality of the descriptors -- the information is needed only by
// development tools.
optional SourceCodeInfo source_code_info = 9;
// The syntax of the proto file.
// The supported values are "proto2" and "proto3".
optional string syntax = 12;
}
// Describes a message type.
message DescriptorProto {
optional string name = 1;
repeated FieldDescriptorProto field = 2;
repeated FieldDescriptorProto extension = 6;
repeated DescriptorProto nested_type = 3;
repeated EnumDescriptorProto enum_type = 4;
message ExtensionRange {
optional int32 start = 1;
optional int32 end = 2;
optional ExtensionRangeOptions options = 3;
}
repeated ExtensionRange extension_range = 5;
repeated OneofDescriptorProto oneof_decl = 8;
optional MessageOptions options = 7;
// Range of reserved tag numbers. Reserved tag numbers may not be used by
// fields or extension ranges in the same message. Reserved ranges may
// not overlap.
message ReservedRange {
optional int32 start = 1; // Inclusive.
optional int32 end = 2; // Exclusive.
}
repeated ReservedRange reserved_range = 9;
// Reserved field names, which may not be used by fields in the same message.
// A given name may only be reserved once.
repeated string reserved_name = 10;
}
message ExtensionRangeOptions {
// The parser stores options it doesn't recognize here. See above.
repeated UninterpretedOption uninterpreted_option = 999;
// Clients can define custom options in extensions of this message. See above.
extensions 1000 to max;
}
// Describes a field within a message.
message FieldDescriptorProto {
enum Type {
// 0 is reserved for errors.
// Order is weird for historical reasons.
TYPE_DOUBLE = 1;
TYPE_FLOAT = 2;
// Not ZigZag encoded. Negative numbers take 10 bytes. Use TYPE_SINT64 if
// negative values are likely.
TYPE_INT64 = 3;
TYPE_UINT64 = 4;
// Not ZigZag encoded. Negative numbers take 10 bytes. Use TYPE_SINT32 if
// negative values are likely.
TYPE_INT32 = 5;
TYPE_FIXED64 = 6;
TYPE_FIXED32 = 7;
TYPE_BOOL = 8;
TYPE_STRING = 9;
// Tag-delimited aggregate.
// Group type is deprecated and not supported in proto3. However, Proto3
// implementations should still be able to parse the group wire format and
// treat group fields as unknown fields.
TYPE_GROUP = 10;
TYPE_MESSAGE = 11; // Length-delimited aggregate.
// New in version 2.
TYPE_BYTES = 12;
TYPE_UINT32 = 13;
TYPE_ENUM = 14;
TYPE_SFIXED32 = 15;
TYPE_SFIXED64 = 16;
TYPE_SINT32 = 17; // Uses ZigZag encoding.
TYPE_SINT64 = 18; // Uses ZigZag encoding.
};
enum Label {
// 0 is reserved for errors
LABEL_OPTIONAL = 1;
LABEL_REQUIRED = 2;
LABEL_REPEATED = 3;
};
optional string name = 1;
optional int32 number = 3;
optional Label label = 4;
// If type_name is set, this need not be set. If both this and type_name
// are set, this must be one of TYPE_ENUM, TYPE_MESSAGE or TYPE_GROUP.
optional Type type = 5;
// For message and enum types, this is the name of the type. If the name
// starts with a '.', it is fully-qualified. Otherwise, C++-like scoping
// rules are used to find the type (i.e. first the nested types within this
// message are searched, then within the parent, on up to the root
// namespace).
optional string type_name = 6;
// For extensions, this is the name of the type being extended. It is
// resolved in the same manner as type_name.
optional string extendee = 2;
// For numeric types, contains the original text representation of the value.
// For booleans, "true" or "false".
// For strings, contains the default text contents (not escaped in any way).
// For bytes, contains the C escaped value. All bytes >= 128 are escaped.
// TODO(kenton): Base-64 encode?
optional string default_value = 7;
// If set, gives the index of a oneof in the containing type's oneof_decl
// list. This field is a member of that oneof.
optional int32 oneof_index = 9;
// JSON name of this field. The value is set by protocol compiler. If the
// user has set a "json_name" option on this field, that option's value
// will be used. Otherwise, it's deduced from the field's name by converting
// it to camelCase.
optional string json_name = 10;
optional FieldOptions options = 8;
}
// Describes a oneof.
message OneofDescriptorProto {
optional string name = 1;
optional OneofOptions options = 2;
}
// Describes an enum type.
message EnumDescriptorProto {
optional string name = 1;
repeated EnumValueDescriptorProto value = 2;
optional EnumOptions options = 3;
}
// Describes a value within an enum.
message EnumValueDescriptorProto {
optional string name = 1;
optional int32 number = 2;
optional EnumValueOptions options = 3;
}
// Describes a service.
message ServiceDescriptorProto {
optional string name = 1;
repeated MethodDescriptorProto method = 2;
optional ServiceOptions options = 3;
}
// Describes a method of a service.
message MethodDescriptorProto {
optional string name = 1;
// Input and output type names. These are resolved in the same way as
// FieldDescriptorProto.type_name, but must refer to a message type.
optional string input_type = 2;
optional string output_type = 3;
optional MethodOptions options = 4;
// Identifies if client streams multiple client messages
optional bool client_streaming = 5 [default=false];
// Identifies if server streams multiple server messages
optional bool server_streaming = 6 [default=false];
}
// ===================================================================
// Options
// Each of the definitions above may have "options" attached. These are
// just annotations which may cause code to be generated slightly differently
// or may contain hints for code that manipulates protocol messages.
//
// Clients may define custom options as extensions of the *Options messages.
// These extensions may not yet be known at parsing time, so the parser cannot
// store the values in them. Instead it stores them in a field in the *Options
// message called uninterpreted_option. This field must have the same name
// across all *Options messages. We then use this field to populate the
// extensions when we build a descriptor, at which point all protos have been
// parsed and so all extensions are known.
//
// Extension numbers for custom options may be chosen as follows:
// * For options which will only be used within a single application or
// organization, or for experimental options, use field numbers 50000
// through 99999. It is up to you to ensure that you do not use the
// same number for multiple options.
// * For options which will be published and used publicly by multiple
// independent entities, e-mail protobuf-global-extension-registry@google.com
// to reserve extension numbers. Simply provide your project name (e.g.
// Objective-C plugin) and your project website (if available) -- there's no
// need to explain how you intend to use them. Usually you only need one
// extension number. You can declare multiple options with only one extension
// number by putting them in a sub-message. See the Custom Options section of
// the docs for examples:
// https://developers.google.com/protocol-buffers/docs/proto#options
// If this turns out to be popular, a web service will be set up
// to automatically assign option numbers.
message FileOptions {
// Sets the Java package where classes generated from this .proto will be
// placed. By default, the proto package is used, but this is often
// inappropriate because proto packages do not normally start with backwards
// domain names.
optional string java_package = 1;
// If set, all the classes from the .proto file are wrapped in a single
// outer class with the given name. This applies to both Proto1
// (equivalent to the old "--one_java_file" option) and Proto2 (where
// a .proto always translates to a single class, but you may want to
// explicitly choose the class name).
optional string java_outer_classname = 8;
// If set true, then the Java code generator will generate a separate .java
// file for each top-level message, enum, and service defined in the .proto
// file. Thus, these types will *not* be nested inside the outer class
// named by java_outer_classname. However, the outer class will still be
// generated to contain the file's getDescriptor() method as well as any
// top-level extensions defined in the file.
optional bool java_multiple_files = 10 [default=false];
// This option does nothing.
optional bool java_generate_equals_and_hash = 20 [deprecated=true];
// If set true, then the Java2 code generator will generate code that
// throws an exception whenever an attempt is made to assign a non-UTF-8
// byte sequence to a string field.
// Message reflection will do the same.
// However, an extension field still accepts non-UTF-8 byte sequences.
// This option has no effect when used with the lite runtime.
optional bool java_string_check_utf8 = 27 [default=false];
// Generated classes can be optimized for speed or code size.
enum OptimizeMode {
SPEED = 1; // Generate complete code for parsing, serialization,
// etc.
CODE_SIZE = 2; // Use ReflectionOps to implement these methods.
LITE_RUNTIME = 3; // Generate code using MessageLite and the lite runtime.
}
optional OptimizeMode optimize_for = 9 [default=SPEED];
// Sets the Go package where structs generated from this .proto will be
// placed. If omitted, the Go package will be derived from the following:
// - The basename of the package import path, if provided.
// - Otherwise, the package statement in the .proto file, if present.
// - Otherwise, the basename of the .proto file, without extension.
optional string go_package = 11;
// Should generic services be generated in each language? "Generic" services
// are not specific to any particular RPC system. They are generated by the
// main code generators in each language (without additional plugins).
// Generic services were the only kind of service generation supported by
// early versions of google.protobuf.
//
// Generic services are now considered deprecated in favor of using plugins
// that generate code specific to your particular RPC system. Therefore,
// these default to false. Old code which depends on generic services should
// explicitly set them to true.
optional bool cc_generic_services = 16 [default=false];
optional bool java_generic_services = 17 [default=false];
optional bool py_generic_services = 18 [default=false];
optional bool php_generic_services = 42 [default=false];
// Is this file deprecated?
// Depending on the target platform, this can emit Deprecated annotations
// for everything in the file, or it will be completely ignored; at the very
// least, this is a formalization for deprecating files.
optional bool deprecated = 23 [default=false];
// Enables the use of arenas for the proto messages in this file. This applies
// only to generated classes for C++.
optional bool cc_enable_arenas = 31 [default=false];
// Sets the objective c class prefix which is prepended to all objective c
// generated classes from this .proto. There is no default.
optional string objc_class_prefix = 36;
// Namespace for generated classes; defaults to the package.
optional string csharp_namespace = 37;
// By default Swift generators will take the proto package and CamelCase it
// replacing '.' with underscore and use that to prefix the types/symbols
// defined. When this option is provided, they will use this value instead
// to prefix the types/symbols defined.
optional string swift_prefix = 39;
// Sets the php class prefix which is prepended to all php generated classes
// from this .proto. Default is empty.
optional string php_class_prefix = 40;
// Use this option to change the namespace of php generated classes. Default
// is empty. When this option is empty, the package name will be used for
// determining the namespace.
optional string php_namespace = 41;
// The parser stores options it doesn't recognize here. See above.
repeated UninterpretedOption uninterpreted_option = 999;
// Clients can define custom options in extensions of this message. See above.
extensions 1000 to max;
reserved 38;
}
message MessageOptions {
// Set true to use the old proto1 MessageSet wire format for extensions.
// This is provided for backwards-compatibility with the MessageSet wire
// format. You should not use this for any other reason: It's less
// efficient, has fewer features, and is more complicated.
//
// The message must be defined exactly as follows:
// message Foo {
// option message_set_wire_format = true;
// extensions 4 to max;
// }
// Note that the message cannot have any defined fields; MessageSets only
// have extensions.
//
// All extensions of your type must be singular messages; e.g. they cannot
// be int32s, enums, or repeated messages.
//
// Because this is an option, the above two restrictions are not enforced by
// the protocol compiler.
optional bool message_set_wire_format = 1 [default=false];
// Disables the generation of the standard "descriptor()" accessor, which can
// conflict with a field of the same name. This is meant to make migration
// from proto1 easier; new code should avoid fields named "descriptor".
optional bool no_standard_descriptor_accessor = 2 [default=false];
// Is this message deprecated?
// Depending on the target platform, this can emit Deprecated annotations
// for the message, or it will be completely ignored; at the very least,
// this is a formalization for deprecating messages.
optional bool deprecated = 3 [default=false];
// Whether the message is an automatically generated map entry type for the
// maps field.
//
// For maps fields:
// map<KeyType, ValueType> map_field = 1;
// The parsed descriptor looks like:
// message MapFieldEntry {
// option map_entry = true;
// optional KeyType key = 1;
// optional ValueType value = 2;
// }
// repeated MapFieldEntry map_field = 1;
//
// Implementations may choose not to generate the map_entry=true message, but
// use a native map in the target language to hold the keys and values.
// The reflection APIs in such implementations still need to work as
// if the field is a repeated message field.
//
// NOTE: Do not set the option in .proto files. Always use the maps syntax
// instead. The option should only be implicitly set by the proto compiler
// parser.
optional bool map_entry = 7;
reserved 8; // javalite_serializable
reserved 9; // javanano_as_lite
// The parser stores options it doesn't recognize here. See above.
repeated UninterpretedOption uninterpreted_option = 999;
// Clients can define custom options in extensions of this message. See above.
extensions 1000 to max;
}
message FieldOptions {
// The ctype option instructs the C++ code generator to use a different
// representation of the field than it normally would. See the specific
// options below. This option is not yet implemented in the open source
// release -- sorry, we'll try to include it in a future version!
optional CType ctype = 1 [default = STRING];
enum CType {
// Default mode.
STRING = 0;
CORD = 1;
STRING_PIECE = 2;
}
// The packed option can be enabled for repeated primitive fields to enable
// a more efficient representation on the wire. Rather than repeatedly
// writing the tag and type for each element, the entire array is encoded as
// a single length-delimited blob. In proto3, only explicitly setting it to
// false will avoid using packed encoding.
optional bool packed = 2;
// The jstype option determines the JavaScript type used for values of the
// field. The option is permitted only for 64 bit integral and fixed types
// (int64, uint64, sint64, fixed64, sfixed64). A field with jstype JS_STRING
// is represented as JavaScript string, which avoids loss of precision that
// can happen when a large value is converted to a floating point JavaScript number.
// Specifying JS_NUMBER for the jstype causes the generated JavaScript code to
// use the JavaScript "number" type. The behavior of the default option
// JS_NORMAL is implementation dependent.
//
// This option is an enum to permit additional types to be added, e.g.
// goog.math.Integer.
optional JSType jstype = 6 [default = JS_NORMAL];
enum JSType {
// Use the default type.
JS_NORMAL = 0;
// Use JavaScript strings.
JS_STRING = 1;
// Use JavaScript numbers.
JS_NUMBER = 2;
}
// Should this field be parsed lazily? Lazy applies only to message-type
// fields. It means that when the outer message is initially parsed, the
// inner message's contents will not be parsed but instead stored in encoded
// form. The inner message will actually be parsed when it is first accessed.
//
// This is only a hint. Implementations are free to choose whether to use
// eager or lazy parsing regardless of the value of this option. However,
// setting this option true suggests that the protocol author believes that
// using lazy parsing on this field is worth the additional bookkeeping
// overhead typically needed to implement it.
//
// This option does not affect the public interface of any generated code;
// all method signatures remain the same. Furthermore, thread-safety of the
// interface is not affected by this option; const methods remain safe to
// call from multiple threads concurrently, while non-const methods continue
// to require exclusive access.
//
//
// Note that implementations may choose not to check required fields within
// a lazy sub-message. That is, calling IsInitialized() on the outer message
// may return true even if the inner message has missing required fields.
// This is necessary because otherwise the inner message would have to be
// parsed in order to perform the check, defeating the purpose of lazy
// parsing. An implementation which chooses not to check required fields
// must be consistent about it. That is, for any particular sub-message, the
// implementation must either *always* check its required fields, or *never*
// check its required fields, regardless of whether or not the message has
// been parsed.
optional bool lazy = 5 [default=false];
// Is this field deprecated?
// Depending on the target platform, this can emit Deprecated annotations
// for accessors, or it will be completely ignored; at the very least, this
// is a formalization for deprecating fields.
optional bool deprecated = 3 [default=false];
// For Google-internal migration only. Do not use.
optional bool weak = 10 [default=false];
// The parser stores options it doesn't recognize here. See above.
repeated UninterpretedOption uninterpreted_option = 999;
// Clients can define custom options in extensions of this message. See above.
extensions 1000 to max;
reserved 4; // removed jtype
}
message OneofOptions {
// The parser stores options it doesn't recognize here. See above.
repeated UninterpretedOption uninterpreted_option = 999;
// Clients can define custom options in extensions of this message. See above.
extensions 1000 to max;
}
message EnumOptions {
// Set this option to true to allow mapping different tag names to the same
// value.
optional bool allow_alias = 2;
// Is this enum deprecated?
// Depending on the target platform, this can emit Deprecated annotations
// for the enum, or it will be completely ignored; at the very least, this
// is a formalization for deprecating enums.
optional bool deprecated = 3 [default=false];
reserved 5; // javanano_as_lite
// The parser stores options it doesn't recognize here. See above.
repeated UninterpretedOption uninterpreted_option = 999;
// Clients can define custom options in extensions of this message. See above.
extensions 1000 to max;
}
message EnumValueOptions {
// Is this enum value deprecated?
// Depending on the target platform, this can emit Deprecated annotations
// for the enum value, or it will be completely ignored; at the very least,
// this is a formalization for deprecating enum values.
optional bool deprecated = 1 [default=false];
// The parser stores options it doesn't recognize here. See above.
repeated UninterpretedOption uninterpreted_option = 999;
// Clients can define custom options in extensions of this message. See above.
extensions 1000 to max;
}
message ServiceOptions {
// Note: Field numbers 1 through 32 are reserved for Google's internal RPC
// framework. We apologize for hoarding these numbers to ourselves, but
// we were already using them long before we decided to release Protocol
// Buffers.
// Is this service deprecated?
// Depending on the target platform, this can emit Deprecated annotations
// for the service, or it will be completely ignored; at the very least,
// this is a formalization for deprecating services.
optional bool deprecated = 33 [default=false];
// The parser stores options it doesn't recognize here. See above.
repeated UninterpretedOption uninterpreted_option = 999;
// Clients can define custom options in extensions of this message. See above.
extensions 1000 to max;
}
message MethodOptions {
// Note: Field numbers 1 through 32 are reserved for Google's internal RPC
// framework. We apologize for hoarding these numbers to ourselves, but
// we were already using them long before we decided to release Protocol
// Buffers.
// Is this method deprecated?
// Depending on the target platform, this can emit Deprecated annotations
// for the method, or it will be completely ignored; at the very least,
// this is a formalization for deprecating methods.
optional bool deprecated = 33 [default=false];
// Is this method side-effect-free (or safe in HTTP parlance), or idempotent,
// or neither? HTTP based RPC implementation may choose GET verb for safe
// methods, and PUT verb for idempotent methods instead of the default POST.
enum IdempotencyLevel {
IDEMPOTENCY_UNKNOWN = 0;
NO_SIDE_EFFECTS = 1; // implies idempotent
IDEMPOTENT = 2; // idempotent, but may have side effects
}
optional IdempotencyLevel idempotency_level = 34 [default=IDEMPOTENCY_UNKNOWN];
// The parser stores options it doesn't recognize here. See above.
repeated UninterpretedOption uninterpreted_option = 999;
// Clients can define custom options in extensions of this message. See above.
extensions 1000 to max;
}
// A message representing an option the parser does not recognize. This only
// appears in options protos created by the compiler::Parser class.
// DescriptorPool resolves these when building Descriptor objects. Therefore,
// options protos in descriptor objects (e.g. returned by Descriptor::options(),
// or produced by Descriptor::CopyTo()) will never have UninterpretedOptions
// in them.
message UninterpretedOption {
// The name of the uninterpreted option. Each string represents a segment in
// a dot-separated name. is_extension is true iff a segment represents an
// extension (denoted with parentheses in options specs in .proto files).
// E.g., { ["foo", false], ["bar.baz", true], ["qux", false] } represents
// "foo.(bar.baz).qux".
message NamePart {
required string name_part = 1;
required bool is_extension = 2;
}
repeated NamePart name = 2;
// The value of the uninterpreted option, in whatever type the tokenizer
// identified it as during parsing. Exactly one of these should be set.
optional string identifier_value = 3;
optional uint64 positive_int_value = 4;
optional int64 negative_int_value = 5;
optional double double_value = 6;
optional bytes string_value = 7;
optional string aggregate_value = 8;
}
// ===================================================================
// Optional source code info
// Encapsulates information about the original source file from which a
// FileDescriptorProto was generated.
message SourceCodeInfo {
// A Location identifies a piece of source code in a .proto file which
// corresponds to a particular definition. This information is intended
// to be useful to IDEs, code indexers, documentation generators, and similar
// tools.
//
// For example, say we have a file like:
// message Foo {
// optional string foo = 1;
// }
// Let's look at just the field definition:
//   optional string foo = 1;
//   ^       ^^     ^^  ^  ^^^
//   a       bc     de  f  ghi
// We have the following locations:
//   span   path               represents
//   [a,i)  [ 4, 0, 2, 0 ]     The whole field definition.
//   [a,b)  [ 4, 0, 2, 0, 4 ]  The label (optional).
//   [c,d)  [ 4, 0, 2, 0, 5 ]  The type (string).
//   [e,f)  [ 4, 0, 2, 0, 1 ]  The name (foo).
//   [g,h)  [ 4, 0, 2, 0, 3 ]  The number (1).
//
// Notes:
// - A location may refer to a repeated field itself (i.e. not to any
// particular index within it). This is used whenever a set of elements are
// logically enclosed in a single code segment. For example, an entire
// extend block (possibly containing multiple extension definitions) will
// have an outer location whose path refers to the "extensions" repeated
// field without an index.
// - Multiple locations may have the same path. This happens when a single
// logical declaration is spread out across multiple places. The most
// obvious example is the "extend" block again -- there may be multiple
// extend blocks in the same scope, each of which will have the same path.
// - A location's span is not always a subset of its parent's span. For
// example, the "extendee" of an extension declaration appears at the
// beginning of the "extend" block and is shared by all extensions within
// the block.
// - Just because a location's span is a subset of some other location's span
// does not mean that it is a descendant. For example, a "group" defines
// both a type and a field in a single declaration. Thus, the locations
// corresponding to the type and field and their components will overlap.
// - Code which tries to interpret locations should probably be designed to
// ignore those that it doesn't understand, as more types of locations could
// be recorded in the future.
repeated Location location = 1;
message Location {
// Identifies which part of the FileDescriptorProto was defined at this
// location.
//
// Each element is a field number or an index. They form a path from
// the root FileDescriptorProto to the place where the definition appears. For
// example, this path:
// [ 4, 3, 2, 7, 1 ]
// refers to:
//   file.message_type(3)  // 4, 3
//       .field(7)         // 2, 7
//       .name()           // 1
// This is because FileDescriptorProto.message_type has field number 4:
// repeated DescriptorProto message_type = 4;
// and DescriptorProto.field has field number 2:
// repeated FieldDescriptorProto field = 2;
// and FieldDescriptorProto.name has field number 1:
// optional string name = 1;
//
// Thus, the above path gives the location of a field name. If we removed
// the last element:
// [ 4, 3, 2, 7 ]
// this path refers to the whole field declaration (from the beginning
// of the label to the terminating semicolon).
repeated int32 path = 1 [packed=true];
// Always has exactly three or four elements: start line, start column,
// end line (optional, otherwise assumed same as start line), end column.
// These are packed into a single field for efficiency. Note that line
// and column numbers are zero-based -- typically you will want to add
// 1 to each before displaying to a user.
repeated int32 span = 2 [packed=true];
// If this SourceCodeInfo represents a complete declaration, these are any
// comments appearing before and after the declaration which appear to be
// attached to the declaration.
//
// A series of line comments appearing on consecutive lines, with no other
// tokens appearing on those lines, will be treated as a single comment.
//
// leading_detached_comments will keep paragraphs of comments that appear
// before (but not connected to) the current element. Each paragraph,
// separated by empty lines, will be one comment element in the repeated
// field.
//
// Only the comment content is provided; comment markers (e.g. //) are
// stripped out. For block comments, leading whitespace and an asterisk
// will be stripped from the beginning of each line other than the first.
// Newlines are included in the output.
//
// Examples:
//
// optional int32 foo = 1; // Comment attached to foo.
// // Comment attached to bar.
// optional int32 bar = 2;
//
// optional string baz = 3;
// // Comment attached to baz.
// // Another line attached to baz.
//
// // Comment attached to qux.
// //
// // Another line attached to qux.
// optional double qux = 4;
//
// // Detached comment for corge. This is not leading or trailing comments
// // to qux or corge because there are blank lines separating it from
// // both.
//
// // Detached comment for corge paragraph 2.
//
// optional string corge = 5;
// /* Block comment attached
// * to corge. Leading asterisks
// * will be removed. */
// /* Block comment attached to
// * grault. */
// optional int32 grault = 6;
//
// // ignored detached comments.
optional string leading_comments = 3;
optional string trailing_comments = 4;
repeated string leading_detached_comments = 6;
}
}
// Describes the relationship between generated code and its original source
// file. A GeneratedCodeInfo message is associated with only one generated
// source file, but may contain references to different source .proto files.
message GeneratedCodeInfo {
// An Annotation connects some span of text in generated code to an element
// of its generating .proto file.
repeated Annotation annotation = 1;
message Annotation {
// Identifies the element in the original source .proto file. This field
// is formatted the same as SourceCodeInfo.Location.path.
repeated int32 path = 1 [packed=true];
// Identifies the filesystem path to the original source .proto.
optional string source_file = 2;
// Identifies the starting offset in bytes in the generated code
// that relates to the identified object.
optional int32 begin = 3;
// Identifies the ending offset in bytes in the generated code that
// relates to the identified offset. The end offset should be one past
// the last relevant byte (so the length of the text = end - begin).
optional int32 end = 4;
}
}
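Taken together, these messages describe the format that protoc itself writes with --descriptor_set_out. As a hedged, non-vendored sketch of how this package is consumed from Go, the program below decodes a FileDescriptorSet using the proto and descriptor packages vendored in this commit and walks its files, messages, and fields; the file names set.pb and foo.proto are hypothetical.

package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/golang/protobuf/proto"
	"github.com/golang/protobuf/protoc-gen-go/descriptor"
)

func main() {
	// Hypothetical input: a descriptor set written beforehand with
	//   protoc --include_source_info --descriptor_set_out=set.pb foo.proto
	data, err := ioutil.ReadFile("set.pb")
	if err != nil {
		log.Fatal(err)
	}
	fds := &descriptor.FileDescriptorSet{}
	if err := proto.Unmarshal(data, fds); err != nil {
		log.Fatal(err)
	}
	for _, file := range fds.GetFile() {
		fmt.Printf("file %s (package %s)\n", file.GetName(), file.GetPackage())
		for _, msg := range file.GetMessageType() {
			fmt.Println("  message", msg.GetName())
			for _, field := range msg.GetField() {
				// GetType() yields the FieldDescriptorProto.Type enum defined above.
				fmt.Printf("    field %s = %d (%s)\n",
					field.GetName(), field.GetNumber(), field.GetType())
			}
		}
	}
}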

51
vendor/github.com/golang/protobuf/protoc-gen-go/doc.go generated vendored Normal file
View File

@ -0,0 +1,51 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
/*
A plugin for the Google protocol buffer compiler to generate Go code.
Run it by building this program and putting it in your path with the name
protoc-gen-go
That word 'go' at the end becomes part of the option string set for the
protocol compiler, so once the protocol compiler (protoc) is installed
you can run
protoc --go_out=output_directory input_directory/file.proto
to generate Go bindings for the protocol defined by file.proto.
With that input, the output will be written to
output_directory/file.pb.go
The generated code is documented in the package comment for
the library.
See the README and documentation for protocol buffers to learn more:
https://developers.google.com/protocol-buffers/
*/
package documentation
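A minimal usage sketch (not part of this package): once protoc and protoc-gen-go are installed, the invocation documented above can also be driven from a go:generate directive in the consuming project; the path api/service.proto is illustrative only.

package example

// Running "go generate" in this package invokes protoc as documented above;
// "api/service.proto" is a hypothetical input file.
//go:generate protoc --go_out=. api/service.proto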

View File

@ -0,0 +1,40 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
include $(GOROOT)/src/Make.inc
TARG=github.com/golang/protobuf/compiler/generator
GOFILES=\
generator.go\
DEPS=../descriptor ../plugin ../../proto
include $(GOROOT)/src/Make.pkg

File diff suppressed because it is too large.

View File

@ -0,0 +1,114 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2013 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package generator
import (
"testing"
"github.com/golang/protobuf/protoc-gen-go/descriptor"
)
func TestCamelCase(t *testing.T) {
tests := []struct {
in, want string
}{
{"one", "One"},
{"one_two", "OneTwo"},
{"_my_field_name_2", "XMyFieldName_2"},
{"Something_Capped", "Something_Capped"},
{"my_Name", "My_Name"},
{"OneTwo", "OneTwo"},
{"_", "X"},
{"_a_", "XA_"},
}
for _, tc := range tests {
if got := CamelCase(tc.in); got != tc.want {
t.Errorf("CamelCase(%q) = %q, want %q", tc.in, got, tc.want)
}
}
}
func TestGoPackageOption(t *testing.T) {
tests := []struct {
in string
impPath, pkg string
ok bool
}{
{"", "", "", false},
{"foo", "", "foo", true},
{"github.com/golang/bar", "github.com/golang/bar", "bar", true},
{"github.com/golang/bar;baz", "github.com/golang/bar", "baz", true},
}
for _, tc := range tests {
d := &FileDescriptor{
FileDescriptorProto: &descriptor.FileDescriptorProto{
Options: &descriptor.FileOptions{
GoPackage: &tc.in,
},
},
}
impPath, pkg, ok := d.goPackageOption()
if impPath != tc.impPath || pkg != tc.pkg || ok != tc.ok {
t.Errorf("go_package = %q => (%q, %q, %t), want (%q, %q, %t)", tc.in,
impPath, pkg, ok, tc.impPath, tc.pkg, tc.ok)
}
}
}
func TestUnescape(t *testing.T) {
tests := []struct {
in string
out string
}{
// successful cases, including all kinds of escapes
{"", ""},
{"foo bar baz frob nitz", "foo bar baz frob nitz"},
{`\000\001\002\003\004\005\006\007`, string([]byte{0, 1, 2, 3, 4, 5, 6, 7})},
{`\a\b\f\n\r\t\v\\\?\'\"`, string([]byte{'\a', '\b', '\f', '\n', '\r', '\t', '\v', '\\', '?', '\'', '"'})},
{`\x10\x20\x30\x40\x50\x60\x70\x80`, string([]byte{16, 32, 48, 64, 80, 96, 112, 128})},
// variable length octal escapes
{`\0\018\222\377\3\04\005\6\07`, string([]byte{0, 1, '8', 0222, 255, 3, 4, 5, 6, 7})},
// malformed escape sequences left as is
{"foo \\g bar", "foo \\g bar"},
{"foo \\xg0 bar", "foo \\xg0 bar"},
{"\\", "\\"},
{"\\x", "\\x"},
{"\\xf", "\\xf"},
{"\\777", "\\777"}, // overflows byte
}
for _, tc := range tests {
s := unescape(tc.in)
if s != tc.out {
t.Errorf("doUnescape(%q) = %q; should have been %q", tc.in, s, tc.out)
}
}
}

View File

@ -0,0 +1,463 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2015 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Package grpc outputs gRPC service descriptions in Go code.
// It runs as a plugin for the Go protocol buffer compiler plugin.
// It is linked in to protoc-gen-go.
package grpc
import (
"fmt"
"path"
"strconv"
"strings"
pb "github.com/golang/protobuf/protoc-gen-go/descriptor"
"github.com/golang/protobuf/protoc-gen-go/generator"
)
// generatedCodeVersion indicates a version of the generated code.
// It is incremented whenever an incompatibility between the generated code and
// the grpc package is introduced; the generated code references
// a constant, grpc.SupportPackageIsVersionN (where N is generatedCodeVersion).
const generatedCodeVersion = 4
// Paths for packages used by code generated in this file,
// relative to the import_prefix of the generator.Generator.
const (
contextPkgPath = "golang.org/x/net/context"
grpcPkgPath = "google.golang.org/grpc"
)
func init() {
generator.RegisterPlugin(new(grpc))
}
// grpc is an implementation of the Go protocol buffer compiler's
// plugin architecture. It generates bindings for gRPC support.
type grpc struct {
gen *generator.Generator
}
// Name returns the name of this plugin, "grpc".
func (g *grpc) Name() string {
return "grpc"
}
// The names for packages imported in the generated code.
// They may vary from the final path component of the import path
// if the name is used by other packages.
var (
contextPkg string
grpcPkg string
)
// Init initializes the plugin.
func (g *grpc) Init(gen *generator.Generator) {
g.gen = gen
contextPkg = generator.RegisterUniquePackageName("context", nil)
grpcPkg = generator.RegisterUniquePackageName("grpc", nil)
}
// Given a type name defined in a .proto, return its object.
// Also record that we're using it, to guarantee the associated import.
func (g *grpc) objectNamed(name string) generator.Object {
g.gen.RecordTypeUse(name)
return g.gen.ObjectNamed(name)
}
// Given a type name defined in a .proto, return its name as we will print it.
func (g *grpc) typeName(str string) string {
return g.gen.TypeName(g.objectNamed(str))
}
// P forwards to g.gen.P.
func (g *grpc) P(args ...interface{}) { g.gen.P(args...) }
// Generate generates code for the services in the given file.
func (g *grpc) Generate(file *generator.FileDescriptor) {
if len(file.FileDescriptorProto.Service) == 0 {
return
}
g.P("// Reference imports to suppress errors if they are not otherwise used.")
g.P("var _ ", contextPkg, ".Context")
g.P("var _ ", grpcPkg, ".ClientConn")
g.P()
// Assert version compatibility.
g.P("// This is a compile-time assertion to ensure that this generated file")
g.P("// is compatible with the grpc package it is being compiled against.")
g.P("const _ = ", grpcPkg, ".SupportPackageIsVersion", generatedCodeVersion)
g.P()
for i, service := range file.FileDescriptorProto.Service {
g.generateService(file, service, i)
}
}
// GenerateImports generates the import declaration for this file.
func (g *grpc) GenerateImports(file *generator.FileDescriptor) {
if len(file.FileDescriptorProto.Service) == 0 {
return
}
g.P("import (")
g.P(contextPkg, " ", strconv.Quote(path.Join(g.gen.ImportPrefix, contextPkgPath)))
g.P(grpcPkg, " ", strconv.Quote(path.Join(g.gen.ImportPrefix, grpcPkgPath)))
g.P(")")
g.P()
}
// reservedClientName records whether a client name is reserved on the client side.
var reservedClientName = map[string]bool{
// TODO: do we need any in gRPC?
}
func unexport(s string) string { return strings.ToLower(s[:1]) + s[1:] }
// generateService generates all the code for the named service.
func (g *grpc) generateService(file *generator.FileDescriptor, service *pb.ServiceDescriptorProto, index int) {
path := fmt.Sprintf("6,%d", index) // 6 means service.
origServName := service.GetName()
fullServName := origServName
if pkg := file.GetPackage(); pkg != "" {
fullServName = pkg + "." + fullServName
}
servName := generator.CamelCase(origServName)
g.P()
g.P("// Client API for ", servName, " service")
g.P()
// Client interface.
g.P("type ", servName, "Client interface {")
for i, method := range service.Method {
g.gen.PrintComments(fmt.Sprintf("%s,2,%d", path, i)) // 2 means method in a service.
g.P(g.generateClientSignature(servName, method))
}
g.P("}")
g.P()
// Client structure.
g.P("type ", unexport(servName), "Client struct {")
g.P("cc *", grpcPkg, ".ClientConn")
g.P("}")
g.P()
// NewClient factory.
g.P("func New", servName, "Client (cc *", grpcPkg, ".ClientConn) ", servName, "Client {")
g.P("return &", unexport(servName), "Client{cc}")
g.P("}")
g.P()
var methodIndex, streamIndex int
serviceDescVar := "_" + servName + "_serviceDesc"
// Client method implementations.
for _, method := range service.Method {
var descExpr string
if !method.GetServerStreaming() && !method.GetClientStreaming() {
// Unary RPC method
descExpr = fmt.Sprintf("&%s.Methods[%d]", serviceDescVar, methodIndex)
methodIndex++
} else {
// Streaming RPC method
descExpr = fmt.Sprintf("&%s.Streams[%d]", serviceDescVar, streamIndex)
streamIndex++
}
g.generateClientMethod(servName, fullServName, serviceDescVar, method, descExpr)
}
g.P("// Server API for ", servName, " service")
g.P()
// Server interface.
serverType := servName + "Server"
g.P("type ", serverType, " interface {")
for i, method := range service.Method {
g.gen.PrintComments(fmt.Sprintf("%s,2,%d", path, i)) // 2 means method in a service.
g.P(g.generateServerSignature(servName, method))
}
g.P("}")
g.P()
// Server registration.
g.P("func Register", servName, "Server(s *", grpcPkg, ".Server, srv ", serverType, ") {")
g.P("s.RegisterService(&", serviceDescVar, `, srv)`)
g.P("}")
g.P()
// Server handler implementations.
var handlerNames []string
for _, method := range service.Method {
hname := g.generateServerMethod(servName, fullServName, method)
handlerNames = append(handlerNames, hname)
}
// Service descriptor.
g.P("var ", serviceDescVar, " = ", grpcPkg, ".ServiceDesc {")
g.P("ServiceName: ", strconv.Quote(fullServName), ",")
g.P("HandlerType: (*", serverType, ")(nil),")
g.P("Methods: []", grpcPkg, ".MethodDesc{")
for i, method := range service.Method {
if method.GetServerStreaming() || method.GetClientStreaming() {
continue
}
g.P("{")
g.P("MethodName: ", strconv.Quote(method.GetName()), ",")
g.P("Handler: ", handlerNames[i], ",")
g.P("},")
}
g.P("},")
g.P("Streams: []", grpcPkg, ".StreamDesc{")
for i, method := range service.Method {
if !method.GetServerStreaming() && !method.GetClientStreaming() {
continue
}
g.P("{")
g.P("StreamName: ", strconv.Quote(method.GetName()), ",")
g.P("Handler: ", handlerNames[i], ",")
if method.GetServerStreaming() {
g.P("ServerStreams: true,")
}
if method.GetClientStreaming() {
g.P("ClientStreams: true,")
}
g.P("},")
}
g.P("},")
g.P("Metadata: \"", file.GetName(), "\",")
g.P("}")
g.P()
}
// generateClientSignature returns the client-side signature for a method.
func (g *grpc) generateClientSignature(servName string, method *pb.MethodDescriptorProto) string {
origMethName := method.GetName()
methName := generator.CamelCase(origMethName)
if reservedClientName[methName] {
methName += "_"
}
reqArg := ", in *" + g.typeName(method.GetInputType())
if method.GetClientStreaming() {
reqArg = ""
}
respName := "*" + g.typeName(method.GetOutputType())
if method.GetServerStreaming() || method.GetClientStreaming() {
respName = servName + "_" + generator.CamelCase(origMethName) + "Client"
}
return fmt.Sprintf("%s(ctx %s.Context%s, opts ...%s.CallOption) (%s, error)", methName, contextPkg, reqArg, grpcPkg, respName)
}
func (g *grpc) generateClientMethod(servName, fullServName, serviceDescVar string, method *pb.MethodDescriptorProto, descExpr string) {
sname := fmt.Sprintf("/%s/%s", fullServName, method.GetName())
methName := generator.CamelCase(method.GetName())
inType := g.typeName(method.GetInputType())
outType := g.typeName(method.GetOutputType())
g.P("func (c *", unexport(servName), "Client) ", g.generateClientSignature(servName, method), "{")
if !method.GetServerStreaming() && !method.GetClientStreaming() {
g.P("out := new(", outType, ")")
// TODO: Pass descExpr to Invoke.
g.P("err := ", grpcPkg, `.Invoke(ctx, "`, sname, `", in, out, c.cc, opts...)`)
g.P("if err != nil { return nil, err }")
g.P("return out, nil")
g.P("}")
g.P()
return
}
streamType := unexport(servName) + methName + "Client"
g.P("stream, err := ", grpcPkg, ".NewClientStream(ctx, ", descExpr, `, c.cc, "`, sname, `", opts...)`)
g.P("if err != nil { return nil, err }")
g.P("x := &", streamType, "{stream}")
if !method.GetClientStreaming() {
g.P("if err := x.ClientStream.SendMsg(in); err != nil { return nil, err }")
g.P("if err := x.ClientStream.CloseSend(); err != nil { return nil, err }")
}
g.P("return x, nil")
g.P("}")
g.P()
genSend := method.GetClientStreaming()
genRecv := method.GetServerStreaming()
genCloseAndRecv := !method.GetServerStreaming()
// Stream auxiliary types and methods.
g.P("type ", servName, "_", methName, "Client interface {")
if genSend {
g.P("Send(*", inType, ") error")
}
if genRecv {
g.P("Recv() (*", outType, ", error)")
}
if genCloseAndRecv {
g.P("CloseAndRecv() (*", outType, ", error)")
}
g.P(grpcPkg, ".ClientStream")
g.P("}")
g.P()
g.P("type ", streamType, " struct {")
g.P(grpcPkg, ".ClientStream")
g.P("}")
g.P()
if genSend {
g.P("func (x *", streamType, ") Send(m *", inType, ") error {")
g.P("return x.ClientStream.SendMsg(m)")
g.P("}")
g.P()
}
if genRecv {
g.P("func (x *", streamType, ") Recv() (*", outType, ", error) {")
g.P("m := new(", outType, ")")
g.P("if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err }")
g.P("return m, nil")
g.P("}")
g.P()
}
if genCloseAndRecv {
g.P("func (x *", streamType, ") CloseAndRecv() (*", outType, ", error) {")
g.P("if err := x.ClientStream.CloseSend(); err != nil { return nil, err }")
g.P("m := new(", outType, ")")
g.P("if err := x.ClientStream.RecvMsg(m); err != nil { return nil, err }")
g.P("return m, nil")
g.P("}")
g.P()
}
}
// generateServerSignature returns the server-side signature for a method.
func (g *grpc) generateServerSignature(servName string, method *pb.MethodDescriptorProto) string {
origMethName := method.GetName()
methName := generator.CamelCase(origMethName)
if reservedClientName[methName] {
methName += "_"
}
var reqArgs []string
ret := "error"
if !method.GetServerStreaming() && !method.GetClientStreaming() {
reqArgs = append(reqArgs, contextPkg+".Context")
ret = "(*" + g.typeName(method.GetOutputType()) + ", error)"
}
if !method.GetClientStreaming() {
reqArgs = append(reqArgs, "*"+g.typeName(method.GetInputType()))
}
if method.GetServerStreaming() || method.GetClientStreaming() {
reqArgs = append(reqArgs, servName+"_"+generator.CamelCase(origMethName)+"Server")
}
return methName + "(" + strings.Join(reqArgs, ", ") + ") " + ret
}
func (g *grpc) generateServerMethod(servName, fullServName string, method *pb.MethodDescriptorProto) string {
methName := generator.CamelCase(method.GetName())
hname := fmt.Sprintf("_%s_%s_Handler", servName, methName)
inType := g.typeName(method.GetInputType())
outType := g.typeName(method.GetOutputType())
if !method.GetServerStreaming() && !method.GetClientStreaming() {
g.P("func ", hname, "(srv interface{}, ctx ", contextPkg, ".Context, dec func(interface{}) error, interceptor ", grpcPkg, ".UnaryServerInterceptor) (interface{}, error) {")
g.P("in := new(", inType, ")")
g.P("if err := dec(in); err != nil { return nil, err }")
g.P("if interceptor == nil { return srv.(", servName, "Server).", methName, "(ctx, in) }")
g.P("info := &", grpcPkg, ".UnaryServerInfo{")
g.P("Server: srv,")
g.P("FullMethod: ", strconv.Quote(fmt.Sprintf("/%s/%s", fullServName, methName)), ",")
g.P("}")
g.P("handler := func(ctx ", contextPkg, ".Context, req interface{}) (interface{}, error) {")
g.P("return srv.(", servName, "Server).", methName, "(ctx, req.(*", inType, "))")
g.P("}")
g.P("return interceptor(ctx, in, info, handler)")
g.P("}")
g.P()
return hname
}
streamType := unexport(servName) + methName + "Server"
g.P("func ", hname, "(srv interface{}, stream ", grpcPkg, ".ServerStream) error {")
if !method.GetClientStreaming() {
g.P("m := new(", inType, ")")
g.P("if err := stream.RecvMsg(m); err != nil { return err }")
g.P("return srv.(", servName, "Server).", methName, "(m, &", streamType, "{stream})")
} else {
g.P("return srv.(", servName, "Server).", methName, "(&", streamType, "{stream})")
}
g.P("}")
g.P()
genSend := method.GetServerStreaming()
genSendAndClose := !method.GetServerStreaming()
genRecv := method.GetClientStreaming()
// Stream auxiliary types and methods.
g.P("type ", servName, "_", methName, "Server interface {")
if genSend {
g.P("Send(*", outType, ") error")
}
if genSendAndClose {
g.P("SendAndClose(*", outType, ") error")
}
if genRecv {
g.P("Recv() (*", inType, ", error)")
}
g.P(grpcPkg, ".ServerStream")
g.P("}")
g.P()
g.P("type ", streamType, " struct {")
g.P(grpcPkg, ".ServerStream")
g.P("}")
g.P()
if genSend {
g.P("func (x *", streamType, ") Send(m *", outType, ") error {")
g.P("return x.ServerStream.SendMsg(m)")
g.P("}")
g.P()
}
if genSendAndClose {
g.P("func (x *", streamType, ") SendAndClose(m *", outType, ") error {")
g.P("return x.ServerStream.SendMsg(m)")
g.P("}")
g.P()
}
if genRecv {
g.P("func (x *", streamType, ") Recv() (*", inType, ", error) {")
g.P("m := new(", inType, ")")
g.P("if err := x.ServerStream.RecvMsg(m); err != nil { return nil, err }")
g.P("return m, nil")
g.P("}")
g.P()
}
return hname
}
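To make the output shape concrete, here is a trimmed, hypothetical sketch of the client-side code this plugin emits for a single unary method. The Greeter service, the helloworld package, and the stub message types are illustrative stand-ins, not actual generated output.

package greeterpb

import (
	"golang.org/x/net/context"

	"google.golang.org/grpc"
)

// Stand-ins for the generated proto message types; a real .pb.go file would
// define these from the HelloRequest/HelloReply messages in the .proto input.
type HelloRequest struct{ Name string }
type HelloReply struct{ Message string }

// Client interface, shaped like what generateClientSignature produces for a
// unary method.
type GreeterClient interface {
	SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error)
}

type greeterClient struct {
	cc *grpc.ClientConn
}

func NewGreeterClient(cc *grpc.ClientConn) GreeterClient {
	return &greeterClient{cc}
}

func (c *greeterClient) SayHello(ctx context.Context, in *HelloRequest, opts ...grpc.CallOption) (*HelloReply, error) {
	out := new(HelloReply)
	// The full method name follows "/<package>.<Service>/<Method>".
	err := grpc.Invoke(ctx, "/helloworld.Greeter/SayHello", in, out, c.cc, opts...)
	if err != nil {
		return nil, err
	}
	return out, nil
}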

View File

@ -0,0 +1,34 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2015 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import _ "github.com/golang/protobuf/protoc-gen-go/grpc"

View File

@ -0,0 +1,98 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// protoc-gen-go is a plugin for the Google protocol buffer compiler to generate
// Go code. Run it by building this program and putting it in your path with
// the name
// protoc-gen-go
// That word 'go' at the end becomes part of the option string set for the
// protocol compiler, so once the protocol compiler (protoc) is installed
// you can run
// protoc --go_out=output_directory input_directory/file.proto
// to generate Go bindings for the protocol defined by file.proto.
// With that input, the output will be written to
// output_directory/file.pb.go
//
// The generated code is documented in the package comment for
// the library.
//
// See the README and documentation for protocol buffers to learn more:
// https://developers.google.com/protocol-buffers/
package main
import (
"io/ioutil"
"os"
"github.com/golang/protobuf/proto"
"github.com/golang/protobuf/protoc-gen-go/generator"
)
func main() {
// Begin by allocating a generator. The request and response structures are stored there
// so we can do error handling easily - the response structure contains the field to
// report failure.
g := generator.New()
data, err := ioutil.ReadAll(os.Stdin)
if err != nil {
g.Error(err, "reading input")
}
if err := proto.Unmarshal(data, g.Request); err != nil {
g.Error(err, "parsing input proto")
}
if len(g.Request.FileToGenerate) == 0 {
g.Fail("no files to generate")
}
g.CommandLineParameters(g.Request.GetParameter())
// Create a wrapped version of the Descriptors and EnumDescriptors that
// point to the file that defines them.
g.WrapTypes()
g.SetPackageNames()
g.BuildTypeNameMap()
g.GenerateAllFiles()
// Send back the results.
data, err = proto.Marshal(g.Response)
if err != nil {
g.Error(err, "failed to marshal output proto")
}
_, err = os.Stdout.Write(data)
if err != nil {
g.Error(err, "failed to write output proto")
}
}
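The protoc invocation described in the package comment can also be driven from Go, for example from a small build helper. A minimal sketch, assuming protoc and a protoc-gen-go binary are already on PATH and that example.proto exists in the working directory (both are assumptions for illustration, not files in this repository):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// protoc locates the protoc-gen-go binary on PATH because of the --go_out
	// flag; the generated example.pb.go is written into the current directory.
	cmd := exec.Command("protoc", "--go_out=.", "example.proto")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("protoc: %v", err)
	}
}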

View File

@ -0,0 +1,45 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Not stored here, but plugin.proto is in https://github.com/google/protobuf/
# at src/google/protobuf/compiler/plugin.proto
# Also we need to fix an import.
regenerate:
@echo WARNING! THIS RULE IS PROBABLY NOT RIGHT FOR YOUR INSTALLATION
cp $(HOME)/src/protobuf/include/google/protobuf/compiler/plugin.proto .
protoc --go_out=Mgoogle/protobuf/descriptor.proto=github.com/golang/protobuf/protoc-gen-go/descriptor:../../../../.. \
-I$(HOME)/src/protobuf/include $(HOME)/src/protobuf/include/google/protobuf/compiler/plugin.proto
restore:
cp plugin.pb.golden plugin.pb.go
preserve:
cp plugin.pb.go plugin.pb.golden

View File

@ -0,0 +1,293 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: google/protobuf/compiler/plugin.proto
/*
Package plugin_go is a generated protocol buffer package.
It is generated from these files:
google/protobuf/compiler/plugin.proto
It has these top-level messages:
Version
CodeGeneratorRequest
CodeGeneratorResponse
*/
package plugin_go
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/protoc-gen-go/descriptor"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// The version number of protocol compiler.
type Version struct {
Major *int32 `protobuf:"varint,1,opt,name=major" json:"major,omitempty"`
Minor *int32 `protobuf:"varint,2,opt,name=minor" json:"minor,omitempty"`
Patch *int32 `protobuf:"varint,3,opt,name=patch" json:"patch,omitempty"`
// A suffix for alpha, beta or rc release, e.g., "alpha-1", "rc2". It should
// be empty for mainline stable releases.
Suffix *string `protobuf:"bytes,4,opt,name=suffix" json:"suffix,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *Version) Reset() { *m = Version{} }
func (m *Version) String() string { return proto.CompactTextString(m) }
func (*Version) ProtoMessage() {}
func (*Version) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *Version) GetMajor() int32 {
if m != nil && m.Major != nil {
return *m.Major
}
return 0
}
func (m *Version) GetMinor() int32 {
if m != nil && m.Minor != nil {
return *m.Minor
}
return 0
}
func (m *Version) GetPatch() int32 {
if m != nil && m.Patch != nil {
return *m.Patch
}
return 0
}
func (m *Version) GetSuffix() string {
if m != nil && m.Suffix != nil {
return *m.Suffix
}
return ""
}
// An encoded CodeGeneratorRequest is written to the plugin's stdin.
type CodeGeneratorRequest struct {
// The .proto files that were explicitly listed on the command-line. The
// code generator should generate code only for these files. Each file's
// descriptor will be included in proto_file, below.
FileToGenerate []string `protobuf:"bytes,1,rep,name=file_to_generate,json=fileToGenerate" json:"file_to_generate,omitempty"`
// The generator parameter passed on the command-line.
Parameter *string `protobuf:"bytes,2,opt,name=parameter" json:"parameter,omitempty"`
// FileDescriptorProtos for all files in files_to_generate and everything
// they import. The files will appear in topological order, so each file
// appears before any file that imports it.
//
// protoc guarantees that all proto_files will be written after
// the fields above, even though this is not technically guaranteed by the
// protobuf wire format. This theoretically could allow a plugin to stream
// in the FileDescriptorProtos and handle them one by one rather than read
// the entire set into memory at once. However, as of this writing, this
// is not similarly optimized on protoc's end -- it will store all fields in
// memory at once before sending them to the plugin.
//
// Type names of fields and extensions in the FileDescriptorProto are always
// fully qualified.
ProtoFile []*google_protobuf.FileDescriptorProto `protobuf:"bytes,15,rep,name=proto_file,json=protoFile" json:"proto_file,omitempty"`
// The version number of protocol compiler.
CompilerVersion *Version `protobuf:"bytes,3,opt,name=compiler_version,json=compilerVersion" json:"compiler_version,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *CodeGeneratorRequest) Reset() { *m = CodeGeneratorRequest{} }
func (m *CodeGeneratorRequest) String() string { return proto.CompactTextString(m) }
func (*CodeGeneratorRequest) ProtoMessage() {}
func (*CodeGeneratorRequest) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *CodeGeneratorRequest) GetFileToGenerate() []string {
if m != nil {
return m.FileToGenerate
}
return nil
}
func (m *CodeGeneratorRequest) GetParameter() string {
if m != nil && m.Parameter != nil {
return *m.Parameter
}
return ""
}
func (m *CodeGeneratorRequest) GetProtoFile() []*google_protobuf.FileDescriptorProto {
if m != nil {
return m.ProtoFile
}
return nil
}
func (m *CodeGeneratorRequest) GetCompilerVersion() *Version {
if m != nil {
return m.CompilerVersion
}
return nil
}
// The plugin writes an encoded CodeGeneratorResponse to stdout.
type CodeGeneratorResponse struct {
// Error message. If non-empty, code generation failed. The plugin process
// should exit with status code zero even if it reports an error in this way.
//
// This should be used to indicate errors in .proto files which prevent the
// code generator from generating correct code. Errors which indicate a
// problem in protoc itself -- such as the input CodeGeneratorRequest being
// unparseable -- should be reported by writing a message to stderr and
// exiting with a non-zero status code.
Error *string `protobuf:"bytes,1,opt,name=error" json:"error,omitempty"`
File []*CodeGeneratorResponse_File `protobuf:"bytes,15,rep,name=file" json:"file,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *CodeGeneratorResponse) Reset() { *m = CodeGeneratorResponse{} }
func (m *CodeGeneratorResponse) String() string { return proto.CompactTextString(m) }
func (*CodeGeneratorResponse) ProtoMessage() {}
func (*CodeGeneratorResponse) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *CodeGeneratorResponse) GetError() string {
if m != nil && m.Error != nil {
return *m.Error
}
return ""
}
func (m *CodeGeneratorResponse) GetFile() []*CodeGeneratorResponse_File {
if m != nil {
return m.File
}
return nil
}
// Represents a single generated file.
type CodeGeneratorResponse_File struct {
// The file name, relative to the output directory. The name must not
// contain "." or ".." components and must be relative, not be absolute (so,
// the file cannot lie outside the output directory). "/" must be used as
// the path separator, not "\".
//
// If the name is omitted, the content will be appended to the previous
// file. This allows the generator to break large files into small chunks,
// and allows the generated text to be streamed back to protoc so that large
// files need not reside completely in memory at one time. Note that as of
// this writing protoc does not optimize for this -- it will read the entire
// CodeGeneratorResponse before writing files to disk.
Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
// If non-empty, indicates that the named file should already exist, and the
// content here is to be inserted into that file at a defined insertion
// point. This feature allows a code generator to extend the output
// produced by another code generator. The original generator may provide
// insertion points by placing special annotations in the file that look
// like:
// @@protoc_insertion_point(NAME)
// The annotation can have arbitrary text before and after it on the line,
// which allows it to be placed in a comment. NAME should be replaced with
// an identifier naming the point -- this is what other generators will use
// as the insertion_point. Code inserted at this point will be placed
// immediately above the line containing the insertion point (thus multiple
// insertions to the same point will come out in the order they were added).
// The double-@ is intended to make it unlikely that the generated code
// could contain things that look like insertion points by accident.
//
// For example, the C++ code generator places the following line in the
// .pb.h files that it generates:
// // @@protoc_insertion_point(namespace_scope)
// This line appears within the scope of the file's package namespace, but
// outside of any particular class. Another plugin can then specify the
// insertion_point "namespace_scope" to generate additional classes or
// other declarations that should be placed in this scope.
//
// Note that if the line containing the insertion point begins with
// whitespace, the same whitespace will be added to every line of the
// inserted text. This is useful for languages like Python, where
// indentation matters. In these languages, the insertion point comment
// should be indented the same amount as any inserted code will need to be
// in order to work correctly in that context.
//
// The code generator that generates the initial file and the one which
// inserts into it must both run as part of a single invocation of protoc.
// Code generators are executed in the order in which they appear on the
// command line.
//
// If |insertion_point| is present, |name| must also be present.
InsertionPoint *string `protobuf:"bytes,2,opt,name=insertion_point,json=insertionPoint" json:"insertion_point,omitempty"`
// The file contents.
Content *string `protobuf:"bytes,15,opt,name=content" json:"content,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (m *CodeGeneratorResponse_File) Reset() { *m = CodeGeneratorResponse_File{} }
func (m *CodeGeneratorResponse_File) String() string { return proto.CompactTextString(m) }
func (*CodeGeneratorResponse_File) ProtoMessage() {}
func (*CodeGeneratorResponse_File) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2, 0} }
func (m *CodeGeneratorResponse_File) GetName() string {
if m != nil && m.Name != nil {
return *m.Name
}
return ""
}
func (m *CodeGeneratorResponse_File) GetInsertionPoint() string {
if m != nil && m.InsertionPoint != nil {
return *m.InsertionPoint
}
return ""
}
func (m *CodeGeneratorResponse_File) GetContent() string {
if m != nil && m.Content != nil {
return *m.Content
}
return ""
}
func init() {
proto.RegisterType((*Version)(nil), "google.protobuf.compiler.Version")
proto.RegisterType((*CodeGeneratorRequest)(nil), "google.protobuf.compiler.CodeGeneratorRequest")
proto.RegisterType((*CodeGeneratorResponse)(nil), "google.protobuf.compiler.CodeGeneratorResponse")
proto.RegisterType((*CodeGeneratorResponse_File)(nil), "google.protobuf.compiler.CodeGeneratorResponse.File")
}
func init() { proto.RegisterFile("google/protobuf/compiler/plugin.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 417 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x74, 0x92, 0xcf, 0x6a, 0x14, 0x41,
0x10, 0xc6, 0x19, 0x77, 0x63, 0x98, 0x8a, 0x64, 0x43, 0x13, 0xa5, 0x09, 0x39, 0x8c, 0x8b, 0xe2,
0x5c, 0x32, 0x0b, 0xc1, 0x8b, 0x78, 0x4b, 0x44, 0x3d, 0x78, 0x58, 0x1a, 0xf1, 0x20, 0xc8, 0x30,
0x99, 0xd4, 0x74, 0x5a, 0x66, 0xba, 0xc6, 0xee, 0x1e, 0xf1, 0x49, 0x7d, 0x0f, 0xdf, 0x40, 0xfa,
0xcf, 0x24, 0xb2, 0xb8, 0xa7, 0xee, 0xef, 0x57, 0xd5, 0xd5, 0x55, 0x1f, 0x05, 0x2f, 0x25, 0x91,
0xec, 0x71, 0x33, 0x1a, 0x72, 0x74, 0x33, 0x75, 0x9b, 0x96, 0x86, 0x51, 0xf5, 0x68, 0x36, 0x63,
0x3f, 0x49, 0xa5, 0xab, 0x10, 0x60, 0x3c, 0xa6, 0x55, 0x73, 0x5a, 0x35, 0xa7, 0x9d, 0x15, 0xbb,
0x05, 0x6e, 0xd1, 0xb6, 0x46, 0x8d, 0x8e, 0x4c, 0xcc, 0x5e, 0xb7, 0x70, 0xf8, 0x05, 0x8d, 0x55,
0xa4, 0xd9, 0x29, 0x1c, 0x0c, 0xcd, 0x77, 0x32, 0x3c, 0x2b, 0xb2, 0xf2, 0x40, 0x44, 0x11, 0xa8,
0xd2, 0x64, 0xf8, 0xa3, 0x44, 0xbd, 0xf0, 0x74, 0x6c, 0x5c, 0x7b, 0xc7, 0x17, 0x91, 0x06, 0xc1,
0x9e, 0xc1, 0x63, 0x3b, 0x75, 0x9d, 0xfa, 0xc5, 0x97, 0x45, 0x56, 0xe6, 0x22, 0xa9, 0xf5, 0x9f,
0x0c, 0x4e, 0xaf, 0xe9, 0x16, 0x3f, 0xa0, 0x46, 0xd3, 0x38, 0x32, 0x02, 0x7f, 0x4c, 0x68, 0x1d,
0x2b, 0xe1, 0xa4, 0x53, 0x3d, 0xd6, 0x8e, 0x6a, 0x19, 0x63, 0xc8, 0xb3, 0x62, 0x51, 0xe6, 0xe2,
0xd8, 0xf3, 0xcf, 0x94, 0x5e, 0x20, 0x3b, 0x87, 0x7c, 0x6c, 0x4c, 0x33, 0xa0, 0xc3, 0xd8, 0x4a,
0x2e, 0x1e, 0x00, 0xbb, 0x06, 0x08, 0xe3, 0xd4, 0xfe, 0x15, 0x5f, 0x15, 0x8b, 0xf2, 0xe8, 0xf2,
0x45, 0xb5, 0x6b, 0xcb, 0x7b, 0xd5, 0xe3, 0xbb, 0x7b, 0x03, 0xb6, 0x1e, 0x8b, 0x3c, 0x44, 0x7d,
0x84, 0x7d, 0x82, 0x93, 0xd9, 0xb8, 0xfa, 0x67, 0xf4, 0x24, 0x8c, 0x77, 0x74, 0xf9, 0xbc, 0xda,
0xe7, 0x70, 0x95, 0xcc, 0x13, 0xab, 0x99, 0x24, 0xb0, 0xfe, 0x9d, 0xc1, 0xd3, 0x9d, 0x99, 0xed,
0x48, 0xda, 0xa2, 0xf7, 0x0e, 0x8d, 0x49, 0x3e, 0xe7, 0x22, 0x0a, 0xf6, 0x11, 0x96, 0xff, 0x34,
0xff, 0x7a, 0xff, 0x8f, 0xff, 0x2d, 0x1a, 0x66, 0x13, 0xa1, 0xc2, 0xd9, 0x37, 0x58, 0x86, 0x79,
0x18, 0x2c, 0x75, 0x33, 0x60, 0xfa, 0x26, 0xdc, 0xd9, 0x2b, 0x58, 0x29, 0x6d, 0xd1, 0x38, 0x45,
0xba, 0x1e, 0x49, 0x69, 0x97, 0xcc, 0x3c, 0xbe, 0xc7, 0x5b, 0x4f, 0x19, 0x87, 0xc3, 0x96, 0xb4,
0x43, 0xed, 0xf8, 0x2a, 0x24, 0xcc, 0xf2, 0x4a, 0xc2, 0x79, 0x4b, 0xc3, 0xde, 0xfe, 0xae, 0x9e,
0x6c, 0xc3, 0x6e, 0x06, 0x7b, 0xed, 0xd7, 0x37, 0x52, 0xb9, 0xbb, 0xe9, 0xc6, 0x87, 0x37, 0x92,
0xfa, 0x46, 0xcb, 0x87, 0x65, 0x0c, 0x97, 0xf6, 0x42, 0xa2, 0xbe, 0x90, 0x94, 0x56, 0xfa, 0x6d,
0x3c, 0x6a, 0x49, 0x7f, 0x03, 0x00, 0x00, 0xff, 0xff, 0xf7, 0x15, 0x40, 0xc5, 0xfe, 0x02, 0x00,
0x00,
}
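The comments above describe the full plugin protocol: protoc writes an encoded CodeGeneratorRequest to the plugin's stdin and reads an encoded CodeGeneratorResponse from its stdout. Below is a minimal sketch of a standalone plugin built directly on these generated types; its behaviour (emitting one small .listed.txt file per input) and its name are invented for illustration, and a real Go plugin would normally go through the generator package the way main.go above does.

// Hypothetical protoc-gen-listfiles plugin: for each file protoc asks it to
// handle, it emits a small text file naming that input.
package main

import (
	"io/ioutil"
	"os"

	"github.com/golang/protobuf/proto"
	plugin "github.com/golang/protobuf/protoc-gen-go/plugin"
)

func main() {
	data, err := ioutil.ReadAll(os.Stdin)
	if err != nil {
		os.Stderr.WriteString("listfiles: " + err.Error() + "\n")
		os.Exit(1)
	}
	req := new(plugin.CodeGeneratorRequest)
	if err := proto.Unmarshal(data, req); err != nil {
		// An unparseable request signals a problem in protoc itself, so it is
		// reported on stderr with a non-zero exit, per the comments above.
		os.Stderr.WriteString("listfiles: " + err.Error() + "\n")
		os.Exit(1)
	}
	resp := new(plugin.CodeGeneratorResponse)
	if len(req.GetFileToGenerate()) == 0 {
		// Errors in the .proto input go into the Error field, and the process
		// still exits with status zero, as documented above.
		resp.Error = proto.String("no files to generate")
	}
	for _, name := range req.GetFileToGenerate() {
		resp.File = append(resp.File, &plugin.CodeGeneratorResponse_File{
			Name:    proto.String(name + ".listed.txt"),
			Content: proto.String("saw input file: " + name + "\n"),
		})
	}
	out, err := proto.Marshal(resp)
	if err != nil {
		os.Stderr.WriteString("listfiles: " + err.Error() + "\n")
		os.Exit(1)
	}
	if _, err := os.Stdout.Write(out); err != nil {
		os.Exit(1)
	}
}

Installed on PATH as protoc-gen-listfiles, such a plugin would be selected with a --listfiles_out flag, following the protoc-gen-$NAME / --${NAME}_out convention described in the plugin.proto comments below.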

View File

@ -0,0 +1,83 @@
// Code generated by protoc-gen-go.
// source: google/protobuf/compiler/plugin.proto
// DO NOT EDIT!
package google_protobuf_compiler
import proto "github.com/golang/protobuf/proto"
import "math"
import google_protobuf "github.com/golang/protobuf/protoc-gen-go/descriptor"
// Reference proto and math imports to suppress error if they are not otherwise used.
var _ = proto.GetString
var _ = math.Inf
type CodeGeneratorRequest struct {
FileToGenerate []string `protobuf:"bytes,1,rep,name=file_to_generate" json:"file_to_generate,omitempty"`
Parameter *string `protobuf:"bytes,2,opt,name=parameter" json:"parameter,omitempty"`
ProtoFile []*google_protobuf.FileDescriptorProto `protobuf:"bytes,15,rep,name=proto_file" json:"proto_file,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (this *CodeGeneratorRequest) Reset() { *this = CodeGeneratorRequest{} }
func (this *CodeGeneratorRequest) String() string { return proto.CompactTextString(this) }
func (*CodeGeneratorRequest) ProtoMessage() {}
func (this *CodeGeneratorRequest) GetParameter() string {
if this != nil && this.Parameter != nil {
return *this.Parameter
}
return ""
}
type CodeGeneratorResponse struct {
Error *string `protobuf:"bytes,1,opt,name=error" json:"error,omitempty"`
File []*CodeGeneratorResponse_File `protobuf:"bytes,15,rep,name=file" json:"file,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (this *CodeGeneratorResponse) Reset() { *this = CodeGeneratorResponse{} }
func (this *CodeGeneratorResponse) String() string { return proto.CompactTextString(this) }
func (*CodeGeneratorResponse) ProtoMessage() {}
func (this *CodeGeneratorResponse) GetError() string {
if this != nil && this.Error != nil {
return *this.Error
}
return ""
}
type CodeGeneratorResponse_File struct {
Name *string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
InsertionPoint *string `protobuf:"bytes,2,opt,name=insertion_point" json:"insertion_point,omitempty"`
Content *string `protobuf:"bytes,15,opt,name=content" json:"content,omitempty"`
XXX_unrecognized []byte `json:"-"`
}
func (this *CodeGeneratorResponse_File) Reset() { *this = CodeGeneratorResponse_File{} }
func (this *CodeGeneratorResponse_File) String() string { return proto.CompactTextString(this) }
func (*CodeGeneratorResponse_File) ProtoMessage() {}
func (this *CodeGeneratorResponse_File) GetName() string {
if this != nil && this.Name != nil {
return *this.Name
}
return ""
}
func (this *CodeGeneratorResponse_File) GetInsertionPoint() string {
if this != nil && this.InsertionPoint != nil {
return *this.InsertionPoint
}
return ""
}
func (this *CodeGeneratorResponse_File) GetContent() string {
if this != nil && this.Content != nil {
return *this.Content
}
return ""
}
func init() {
}

View File

@ -0,0 +1,167 @@
// Protocol Buffers - Google's data interchange format
// Copyright 2008 Google Inc. All rights reserved.
// https://developers.google.com/protocol-buffers/
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
// Author: kenton@google.com (Kenton Varda)
//
// WARNING: The plugin interface is currently EXPERIMENTAL and is subject to
// change.
//
// protoc (aka the Protocol Compiler) can be extended via plugins. A plugin is
// just a program that reads a CodeGeneratorRequest from stdin and writes a
// CodeGeneratorResponse to stdout.
//
// Plugins written using C++ can use google/protobuf/compiler/plugin.h instead
// of dealing with the raw protocol defined here.
//
// A plugin executable needs only to be placed somewhere in the path. The
// plugin should be named "protoc-gen-$NAME", and will then be used when the
// flag "--${NAME}_out" is passed to protoc.
syntax = "proto2";
package google.protobuf.compiler;
option java_package = "com.google.protobuf.compiler";
option java_outer_classname = "PluginProtos";
option go_package = "github.com/golang/protobuf/protoc-gen-go/plugin;plugin_go";
import "google/protobuf/descriptor.proto";
// The version number of protocol compiler.
message Version {
optional int32 major = 1;
optional int32 minor = 2;
optional int32 patch = 3;
// A suffix for alpha, beta or rc release, e.g., "alpha-1", "rc2". It should
// be empty for mainline stable releases.
optional string suffix = 4;
}
// An encoded CodeGeneratorRequest is written to the plugin's stdin.
message CodeGeneratorRequest {
// The .proto files that were explicitly listed on the command-line. The
// code generator should generate code only for these files. Each file's
// descriptor will be included in proto_file, below.
repeated string file_to_generate = 1;
// The generator parameter passed on the command-line.
optional string parameter = 2;
// FileDescriptorProtos for all files in files_to_generate and everything
// they import. The files will appear in topological order, so each file
// appears before any file that imports it.
//
// protoc guarantees that all proto_files will be written after
// the fields above, even though this is not technically guaranteed by the
// protobuf wire format. This theoretically could allow a plugin to stream
// in the FileDescriptorProtos and handle them one by one rather than read
// the entire set into memory at once. However, as of this writing, this
// is not similarly optimized on protoc's end -- it will store all fields in
// memory at once before sending them to the plugin.
//
// Type names of fields and extensions in the FileDescriptorProto are always
// fully qualified.
repeated FileDescriptorProto proto_file = 15;
// The version number of protocol compiler.
optional Version compiler_version = 3;
}
// The plugin writes an encoded CodeGeneratorResponse to stdout.
message CodeGeneratorResponse {
// Error message. If non-empty, code generation failed. The plugin process
// should exit with status code zero even if it reports an error in this way.
//
// This should be used to indicate errors in .proto files which prevent the
// code generator from generating correct code. Errors which indicate a
// problem in protoc itself -- such as the input CodeGeneratorRequest being
// unparseable -- should be reported by writing a message to stderr and
// exiting with a non-zero status code.
optional string error = 1;
// Represents a single generated file.
message File {
// The file name, relative to the output directory. The name must not
// contain "." or ".." components and must be relative, not be absolute (so,
// the file cannot lie outside the output directory). "/" must be used as
// the path separator, not "\".
//
// If the name is omitted, the content will be appended to the previous
// file. This allows the generator to break large files into small chunks,
// and allows the generated text to be streamed back to protoc so that large
// files need not reside completely in memory at one time. Note that as of
// this writing protoc does not optimize for this -- it will read the entire
// CodeGeneratorResponse before writing files to disk.
optional string name = 1;
// If non-empty, indicates that the named file should already exist, and the
// content here is to be inserted into that file at a defined insertion
// point. This feature allows a code generator to extend the output
// produced by another code generator. The original generator may provide
// insertion points by placing special annotations in the file that look
// like:
// @@protoc_insertion_point(NAME)
// The annotation can have arbitrary text before and after it on the line,
// which allows it to be placed in a comment. NAME should be replaced with
// an identifier naming the point -- this is what other generators will use
// as the insertion_point. Code inserted at this point will be placed
// immediately above the line containing the insertion point (thus multiple
// insertions to the same point will come out in the order they were added).
// The double-@ is intended to make it unlikely that the generated code
// could contain things that look like insertion points by accident.
//
// For example, the C++ code generator places the following line in the
// .pb.h files that it generates:
// // @@protoc_insertion_point(namespace_scope)
// This line appears within the scope of the file's package namespace, but
// outside of any particular class. Another plugin can then specify the
// insertion_point "namespace_scope" to generate additional classes or
// other declarations that should be placed in this scope.
//
// Note that if the line containing the insertion point begins with
// whitespace, the same whitespace will be added to every line of the
// inserted text. This is useful for languages like Python, where
// indentation matters. In these languages, the insertion point comment
// should be indented the same amount as any inserted code will need to be
// in order to work correctly in that context.
//
// The code generator that generates the initial file and the one which
// inserts into it must both run as part of a single invocation of protoc.
// Code generators are executed in the order in which they appear on the
// command line.
//
// If |insertion_point| is present, |name| must also be present.
optional string insertion_point = 2;
// The file contents.
optional string content = 15;
}
repeated File file = 15;
}
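The insertion_point mechanism documented above lets one generator extend a file that another generator produced earlier in the same protoc run. A short sketch of the response such a second-pass plugin might return, using the Go types generated from this file; the target file name example.pb.go and the point name my_insertion_point are assumptions for illustration (protoc-gen-go's own output is not claimed to define that point), and the request on stdin is ignored for brevity:

package main

import (
	"os"

	"github.com/golang/protobuf/proto"
	plugin "github.com/golang/protobuf/protoc-gen-go/plugin"
)

func main() {
	resp := &plugin.CodeGeneratorResponse{
		File: []*plugin.CodeGeneratorResponse_File{{
			// Name must match the file written by the earlier generator, and
			// InsertionPoint must match a @@protoc_insertion_point(NAME)
			// annotation inside it; Content is placed immediately above the
			// line carrying that annotation. Both names here are hypothetical.
			Name:           proto.String("example.pb.go"),
			InsertionPoint: proto.String("my_insertion_point"),
			Content:        proto.String("// inserted by a second generator\n"),
		}},
	}
	out, err := proto.Marshal(resp)
	if err != nil {
		os.Exit(1)
	}
	os.Stdout.Write(out)
}

As the comment above notes, both generators must run in the same protoc invocation, and they are executed in the order in which they appear on the command line, so the inserting generator is listed after the one that creates the file.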

View File

@ -0,0 +1,73 @@
# Go support for Protocol Buffers - Google's data interchange format
#
# Copyright 2010 The Go Authors. All rights reserved.
# https://github.com/golang/protobuf
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
all:
@echo run make test
include ../../Make.protobuf
test: golden testbuild
#test: golden testbuild extension_test
# ./extension_test
# @echo PASS
my_test/test.pb.go: my_test/test.proto
protoc --go_out=Mmulti/multi1.proto=github.com/golang/protobuf/protoc-gen-go/testdata/multi:. $<
golden:
make -B my_test/test.pb.go
sed -i -e '/return.*fileDescriptor/d' my_test/test.pb.go
sed -i -e '/^var fileDescriptor/,/^}/d' my_test/test.pb.go
sed -i -e '/proto.RegisterFile.*fileDescriptor/d' my_test/test.pb.go
gofmt -w my_test/test.pb.go
diff -w my_test/test.pb.go my_test/test.pb.go.golden
nuke: clean
testbuild: regenerate
go test
regenerate:
# Invoke protoc once to generate three independent .pb.go files in the same package.
protoc --go_out=. multi/multi1.proto multi/multi2.proto multi/multi3.proto
#extension_test: extension_test.$O
# $(LD) -L. -o $@ $<
#multi.a: multi3.pb.$O multi2.pb.$O multi1.pb.$O
# rm -f multi.a
# $(QUOTED_GOBIN)/gopack grc $@ $<
#test.pb.go: imp.pb.go
#multi1.pb.go: multi2.pb.go multi3.pb.go
#main.$O: imp.pb.$O test.pb.$O multi.a
#extension_test.$O: extension_base.pb.$O extension_extra.pb.$O extension_user.pb.$O

View File

@ -0,0 +1,46 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2010 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
syntax = "proto2";
package extension_base;
message BaseMessage {
optional int32 height = 1;
extensions 4 to 9;
extensions 16 to max;
}
// Another message that may be extended, using message_set_wire_format.
message OldStyleMessage {
option message_set_wire_format = true;
extensions 100 to max;
}

View File

@ -0,0 +1,38 @@
// Go support for Protocol Buffers - Google's data interchange format
//
// Copyright 2011 The Go Authors. All rights reserved.
// https://github.com/golang/protobuf
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
syntax = "proto2";
package extension_extra;
message ExtraMessage {
optional int32 width = 1;
}

Some files were not shown because too many files have changed in this diff.