File rancher-cli-2.10.0.obscpio of Package rancher-cli
==> rancher-cli-2.10.0/.dockerignore <==
./bin
./build
./.dapper
./dist
./.trash-cache

==> rancher-cli-2.10.0/.github/workflows/ci.yml <==
name: CI
on:
  workflow_dispatch:
  push:
  pull_request:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v3
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
          cache: false
      - name: Lint
        uses: golangci/golangci-lint-action@v4
      - name: Validate Go modules
        run: ./scripts/validate
      - name: Test
        run: ./scripts/test
      - name: Get Tag
        if: startsWith(github.ref, 'refs/tags/v')
        run: echo "GITHUB_TAG=$GITHUB_REF_NAME" >> $GITHUB_ENV
      - name: Build
        env:
          CROSS: 1
        run: ./scripts/build
      - name: Package
        run: |
          ./scripts/package
          ls -lR dist/artifacts
          # Stage binary for packaging step
          cp -r ./bin/* ./package/
          # Export the tag for the next step
          source ./scripts/version
          echo "VERSION=$VERSION"
          echo "VERSION=$VERSION" >> $GITHUB_ENV
      - name: Docker Build
        uses: docker/build-push-action@v5
        with:
          push: false
          context: package
          tags: rancher/cli2:${{ env.VERSION }}

==> rancher-cli-2.10.0/.github/workflows/fossa.yml <==
name: FOSSA
on:
  workflow_dispatch:
  push:
    tags:
      - v*
    branches:
      - v*
      - main
jobs:
  fossa:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write # needed for the Vault authentication
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v3
      - name: Load Secrets from Vault
        uses: rancher-eio/read-vault-secrets@main
        with:
          secrets: |
            secret/data/github/org/rancher/fossa/push token | FOSSA
      - name: Check FOSSA compliance
        uses: fossas/fossa-action@main
        with:
          api-key: ${{ env.FOSSA }}

==> rancher-cli-2.10.0/.github/workflows/release.yml <==
name: Release
on:
  push:
    tags:
      - v*
jobs:
  release:
    permissions:
      contents: write # needed to create/update the release with the assets
      id-token: write # needed for the Vault authentication
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Repo
        uses: actions/checkout@v3
      - name: Load Secrets from Vault
        uses: rancher-eio/read-vault-secrets@main
        with:
          secrets: |
            secret/data/github/repo/${{ github.repository }}/dockerhub/rancher/credentials username | DOCKER_USERNAME ;
            secret/data/github/repo/${{ github.repository }}/dockerhub/rancher/credentials password | DOCKER_PASSWORD ;
            secret/data/github/repo/${{ github.repository }}/google-auth/rancher/credentials token | GOOGLE_AUTH ;
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ env.DOCKER_USERNAME }}
          password: ${{ env.DOCKER_PASSWORD }}
      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v2
        with:
          credentials_json: "${{ env.GOOGLE_AUTH }}"
      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version-file: go.mod
          cache: false
      - name: Lint
        uses: golangci/golangci-lint-action@v4
      - name: Validate Go modules
        run: ./scripts/validate
      - name: Test
        run: ./scripts/test
      - name: Get Tag
        if: startsWith(github.ref, 'refs/tags/v')
        run: echo "GITHUB_TAG=$GITHUB_REF_NAME" >> $GITHUB_ENV
      - name: Build
        env:
          CROSS: 1
        run: ./scripts/build
      - name: Package
        run: |
          ./scripts/package
          ls -lR dist/artifacts
          # Stage binary for packaging step
          cp -r ./bin/* ./package/
          # Export the tag for the next step
          source ./scripts/version
          echo "VERSION=$VERSION"
          echo "VERSION=$VERSION" >> $GITHUB_ENV
      - name: Upload Release assets
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          cd dist/artifacts/$VERSION
          ls -lR
          # generate sha256sum file
          find . -maxdepth 1 -type f ! -name sha256sum.txt -printf '%P\0' | xargs -0 sha256sum > sha256sum.txt
          gh release upload $VERSION *.txt *.xz *.gz *.zip
      - name: Upload Release assets to Google Cloud
        uses: google-github-actions/upload-cloud-storage@v2
        with:
          path: dist/artifacts/${{ env.VERSION }}
          destination: releases.rancher.com/cli2/${{ env.VERSION }}
          glob: '*.*' # copy only the files in the path folder
          parent: false
          process_gcloudignore: false
          headers: |-
            cache-control: public,max-age=3600
      - name: Docker Build
        uses: docker/build-push-action@v5
        with:
          push: true
          context: package
          tags: rancher/cli2:${{ env.VERSION }}

==> rancher-cli-2.10.0/.gitignore <==
/.dapper
/bin
/build
/dist
*.swp
/.trash-cache
/.idea
trash.lock
/cli

==> rancher-cli-2.10.0/.golangci.json <==
{
  "linters": {
    "enable": [
      "gofmt"
    ]
  },
  "run": {
    "timeout": "10m"
  }
}

==> rancher-cli-2.10.0/CODEOWNERS <==
* @rancher/collie

==> rancher-cli-2.10.0/LICENSE <==
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. 
For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. 
If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. 
You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

==> rancher-cli-2.10.0/Makefile <==
TARGETS := $(shell ls scripts)

$(TARGETS):
	@./scripts/$@

.DEFAULT_GOAL := ci

.PHONY: $(TARGETS)

==> rancher-cli-2.10.0/README.md <==
Rancher CLI
===========

The Rancher Command Line Interface (CLI) is a unified tool for interacting with your Rancher Server.
For usage information see: https://rancher.com/docs/rancher/v2.x/en/cli/

> **Note:** This is for version 2.x.x of the cli, for info on 1.6.x see [here](https://github.com/rancher/cli/tree/v1.6)

## Installing

Check the [releases page](https://github.com/rancher/cli/releases) for direct downloads of the binary. After you download it, you can add it to your `$PATH` or [build your own from source](#building-from-source).

## Setting up Rancher CLI with a Rancher Server

The CLI requires your Rancher Server address, along with [credentials for authentication](https://rancher.com/docs/rancher/v2.x/en/user-settings/api-keys/). Rancher CLI pulls this information from a JSON file, `cli2.json`, which is created the first time you run `rancher login`. By default, the path of this file is `~/.rancher/cli2.json`.

```
$ rancher login https://<RANCHER_SERVER_URL> -t my-secret-token
```

> **Note:** When entering your `<RANCHER_SERVER_URL>`, include the port that was exposed while you installed Rancher Server.

## Usage

Run `rancher --help` for a list of available commands.

## Building from Source

The binaries will be located in `/bin`.

### Linux Binary

Run `make build`.

### Mac Binary

Run `CROSS=1 make build`.

## Docker Image

Run `docker run --rm -it -v <PATH_TO_CONFIG>:/root/.rancher/cli2.json rancher/cli2 [ARGS]`. Pass credentials by replacing `<PATH_TO_CONFIG>` with your config file for the server.

To build `rancher/cli`, run `make`. To use a custom Docker repository, do `REPO=custom make`, which produces a `custom/cli` image.

## Contact

For bugs, questions, comments, corrections, suggestions, etc., open an issue in [rancher/rancher](//github.com/rancher/rancher/issues) with a title prefix of `[cli] `.

Or just [click here](//github.com/rancher/rancher/issues/new?title=%5Bcli%5D%20) to create a new issue.

## License

Copyright (c) 2014-2019 [Rancher Labs, Inc.](http://rancher.com)

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

==> rancher-cli-2.10.0/cliclient/cliclient.go <==
package cliclient

import (
	"errors"
	"fmt"
	"strings"

	errorsPkg "github.com/pkg/errors"
	"github.com/rancher/cli/config"
	"github.com/rancher/norman/clientbase"
	ntypes "github.com/rancher/norman/types"
	capiClient "github.com/rancher/rancher/pkg/client/generated/cluster/v1beta1"
	clusterClient "github.com/rancher/rancher/pkg/client/generated/cluster/v3"
	managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3"
	projectClient "github.com/rancher/rancher/pkg/client/generated/project/v3"
	"github.com/sirupsen/logrus"
	"golang.org/x/sync/errgroup"
)

type MasterClient struct {
	ClusterClient    *clusterClient.Client
	ManagementClient *managementClient.Client
	ProjectClient    *projectClient.Client
	UserConfig       *config.ServerConfig
	CAPIClient       *capiClient.Client
}

// NewMasterClient returns a new MasterClient with Cluster, Management and Project
// clients populated
func NewMasterClient(config *config.ServerConfig) (*MasterClient, error) {
	mc := &MasterClient{
		UserConfig: config,
	}

	clustProj := CheckProject(mc.UserConfig.Project)
	if clustProj == nil {
		logrus.Warn("No context set; some commands will not work. Run `rancher login` again.")
	}

	var g errgroup.Group
	g.Go(mc.newManagementClient)
	g.Go(mc.newClusterClient)
	g.Go(mc.newProjectClient)
	g.Go(mc.newCAPIClient)
	if err := g.Wait(); err != nil {
		return nil, err
	}
	return mc, nil
}

// NewManagementClient returns a new MasterClient with only the Management client
func NewManagementClient(config *config.ServerConfig) (*MasterClient, error) {
	mc := &MasterClient{
		UserConfig: config,
	}
	err := mc.newManagementClient()
	if err != nil {
		return nil, err
	}
	return mc, nil
}

// NewClusterClient returns a new MasterClient with only the Cluster client
func NewClusterClient(config *config.ServerConfig) (*MasterClient, error) {
	clustProj := CheckProject(config.Project)
	if clustProj == nil {
		return nil, errors.New("no context set")
	}
	mc := &MasterClient{
		UserConfig: config,
	}
	err := mc.newClusterClient()
	if err != nil {
		return nil, err
	}
	return mc, nil
}

// NewProjectClient returns a new MasterClient with only the Project client
func NewProjectClient(config *config.ServerConfig) (*MasterClient, error) {
	clustProj := CheckProject(config.Project)
	if clustProj == nil {
		return nil, errors.New("no context set")
	}
	mc := &MasterClient{
		UserConfig: config,
	}
	err := mc.newProjectClient()
	if err != nil {
		return nil, err
	}
	return mc, nil
}

func (mc *MasterClient) newManagementClient() error {
	options := createClientOpts(mc.UserConfig)

	// Setup the management client
	mClient, err := managementClient.NewClient(options)
	if err != nil {
		return err
	}
	mc.ManagementClient = mClient

	return nil
}

func (mc *MasterClient) newClusterClient() error {
	options := createClientOpts(mc.UserConfig)
	options.URL = options.URL + "/clusters/" + mc.UserConfig.FocusedCluster()

	// Setup the project client
	cc, err := clusterClient.NewClient(options)
	if err != nil {
		if clientbase.IsNotFound(err) {
			err = errorsPkg.WithMessage(err, "Current cluster not available, try running `rancher context switch`. Error")
		}
		return err
	}
	mc.ClusterClient = cc

	return nil
}

func (mc *MasterClient) newProjectClient() error {
	options := createClientOpts(mc.UserConfig)
	options.URL = options.URL + "/projects/" + mc.UserConfig.Project

	// Setup the project client
	pc, err := projectClient.NewClient(options)
	if err != nil {
		if clientbase.IsNotFound(err) {
			err = errorsPkg.WithMessage(err, "Current project not available, try running `rancher context switch`. Error")
		}
		return err
	}
	mc.ProjectClient = pc

	return nil
}

func (mc *MasterClient) newCAPIClient() error {
	options := createClientOpts(mc.UserConfig)
	options.URL = strings.TrimSuffix(options.URL, "/v3") + "/v1"

	// Setup the CAPI client
	cc, err := capiClient.NewClient(options)
	if err != nil {
		return err
	}
	mc.CAPIClient = cc

	return nil
}

func (mc *MasterClient) ByID(resource *ntypes.Resource, respObject interface{}) error {
	if strings.HasPrefix(resource.Type, "cluster.x-k8s.io") {
		return mc.CAPIClient.ByID(resource.Type, resource.ID, &respObject)
	} else if _, ok := mc.ManagementClient.APIBaseClient.Types[resource.Type]; ok {
		return mc.ManagementClient.ByID(resource.Type, resource.ID, &respObject)
	} else if _, ok := mc.ProjectClient.APIBaseClient.Types[resource.Type]; ok {
		return mc.ProjectClient.ByID(resource.Type, resource.ID, &respObject)
	} else if _, ok := mc.ClusterClient.APIBaseClient.Types[resource.Type]; ok {
		return mc.ClusterClient.ByID(resource.Type, resource.ID, &respObject)
	}
	return fmt.Errorf("MasterClient - unknown resource type %v", resource.Type)
}

func createClientOpts(config *config.ServerConfig) *clientbase.ClientOpts {
	serverURL := config.URL

	if !strings.HasSuffix(serverURL, "/v3") {
		serverURL = config.URL + "/v3"
	}

	options := &clientbase.ClientOpts{
		URL:       serverURL,
		AccessKey: config.AccessKey,
		SecretKey: config.SecretKey,
		CACerts:   config.CACerts,
	}
	return options
}

func SplitOnColon(s string) []string {
	return strings.Split(s, ":")
}

// CheckProject verifies s matches the valid project ID of <cluster>:<project>
func CheckProject(s string) []string {
	clustProj := SplitOnColon(s)

	if len(s) == 0 || len(clustProj) != 2 {
		return nil
	}

	return clustProj
}

==> rancher-cli-2.10.0/cmd/app.go <==
package cmd

import (
	"bufio"
	"encoding/base64"
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"os"
	"path/filepath"
	"sort"
	"strings"
	"time"

	gover "github.com/hashicorp/go-version"
	"github.com/pkg/errors"
	"github.com/rancher/cli/cliclient"
	"github.com/rancher/norman/clientbase"
	clusterClient "github.com/rancher/rancher/pkg/client/generated/cluster/v3"
	managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3"
	projectClient "github.com/rancher/rancher/pkg/client/generated/project/v3"
	"github.com/sirupsen/logrus"
	"github.com/urfave/cli"
	"gopkg.in/yaml.v2"
)

const (
	installAppDescription = `
Install an app template in the current Rancher server.
This defaults to the newest version of the app template.
Specify a version using '--version' if required.
The app will be installed into a new namespace unless '--namespace' is specified.

Example:
	# Install the redis template without any options
	$ rancher app install redis appFoo

	# Block cli until installation has finished or encountered an error. Use after app install.
	$ rancher wait <app-id>

	# Install the local redis template folder without any options
	$ rancher app install ./redis appFoo

	# Install the redis template and specify an answers file location
	$ rancher app install --answers /example/answers.yaml redis appFoo

	# Install the redis template and set multiple answers and the version to install
	$ rancher app install --set foo=bar --set-string baz=bunk --version 1.0.1 redis appFoo

	# Install the redis template and specify the namespace for the app
	$ rancher app install --namespace bar redis appFoo
`

	upgradeAppDescription = `
Upgrade an existing app to a newer version via app template or app version in the current Rancher server.

Example:
	# Upgrade the 'appFoo' app to latest version without any options
	$ rancher app upgrade appFoo latest

	# Upgrade the 'appFoo' app by local template folder without any options
	$ rancher app upgrade appFoo ./redis

	# Upgrade the 'appFoo' app and set multiple answers and the 0.2.0 version to install
	$ rancher app upgrade --set foo=bar --set-string baz=bunk appFoo 0.2.0
`
)

type AppData struct {
	ID       string
	App      projectClient.App
	Catalog  string
	Template string
	Version  string
}

type TemplateData struct {
	ID       string
	Template managementClient.Template
	Category string
}

type VersionData struct {
	Current string
	Version string
}

type revision struct {
	Current  string
	Name     string
	Created  time.Time
	Human    string
	Catalog  string
	Template string
	Version  string
}

type chartVersion struct {
	chartMetadata `yaml:",inline"`
	Dir           string   `json:"-" yaml:"-"`
	URLs          []string `json:"urls" yaml:"urls"`
	Digest        string   `json:"digest,omitempty" yaml:"digest,omitempty"`
}

type chartMetadata struct {
	Name        string   `json:"name,omitempty" yaml:"name,omitempty"`
	Sources     []string `json:"sources,omitempty" yaml:"sources,omitempty"`
	Version     string   `json:"version,omitempty" yaml:"version,omitempty"`
	KubeVersion string   `json:"kubeVersion,omitempty" yaml:"kubeVersion,omitempty"`
	Description string   `json:"description,omitempty" yaml:"description,omitempty"`
	Keywords    []string `json:"keywords,omitempty" yaml:"keywords,omitempty"`
	Icon        string   `json:"icon,omitempty" yaml:"icon,omitempty"`
}

type revSlice []revision

func (s revSlice) Less(i, j int) bool { return s[i].Created.After(s[j].Created) }
func (s revSlice) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s revSlice) Len() int           { return len(s) }

func AppCommand() cli.Command {
	appLsFlags := []cli.Flag{
		formatFlag,
		cli.BoolFlag{
			Name:  "quiet,q",
			Usage: "Only display IDs",
		},
	}

	return cli.Command{
		Name:    "apps",
		Aliases: []string{"app"},
		Usage:   "Operations with apps. Uses helm. Flags prepended with \"helm\" can also be accurately described by helm documentation.",
		Action:  defaultAction(appLs),
		Flags:   appLsFlags,
		Subcommands: []cli.Command{
			{
				Name:        "ls",
				Usage:       "List apps",
				Description: "\nList all apps in the current Rancher server",
				ArgsUsage:   "None",
				Action:      appLs,
				Flags:       appLsFlags,
			},
			{
				Name:      "delete",
				Usage:     "Delete an app",
				Action:    appDelete,
				ArgsUsage: "[APP_NAME/APP_ID]",
			},
			{
				Name:        "install",
				Usage:       "Install an app template",
				Description: installAppDescription,
				Action:      templateInstall,
				ArgsUsage:   "[TEMPLATE_NAME/TEMPLATE_PATH, APP_NAME]",
				Flags: []cli.Flag{
					cli.StringFlag{
						Name:  "answers,a",
						Usage: "Path to an answers file, the format of the file is a map with key:value. This supports JSON and YAML.",
					},
					cli.StringFlag{
						Name:  "values",
						Usage: "Path to a helm values file.",
					},
					cli.StringFlag{
						Name:  "namespace,n",
						Usage: "Namespace to install the app into",
					},
					cli.StringSliceFlag{
						Name:  "set",
						Usage: "Set answers for the template, can be used multiple times. Example: --set foo=bar",
					},
					cli.StringSliceFlag{
						Name:  "set-string",
						Usage: "Set string answers for the template (Skips Helm's type conversion), can be used multiple times. Example: --set-string foo=bar",
					},
					cli.StringFlag{
						Name:  "version",
						Usage: "Version of the template to use",
					},
					cli.BoolFlag{
						Name:  "no-prompt",
						Usage: "Suppress asking questions and use the default values when required answers are not provided",
					},
					cli.IntFlag{
						Name:  "helm-timeout",
						Usage: "Amount of time for helm to wait for k8s commands (default is 300 secs). Example: --helm-timeout 600",
						Value: 300,
					},
					cli.BoolFlag{
						Name:  "helm-wait",
						Usage: "Helm will wait for as long as timeout value, for installed resources to be ready (pods, PVCs, deployments, etc.). Example: --helm-wait",
					},
				},
			},
			{
				Name:      "rollback",
				Usage:     "Rollback an app to a previous version",
				Action:    appRollback,
				ArgsUsage: "[APP_NAME/APP_ID, REVISION_ID/REVISION_NAME]",
				Flags: []cli.Flag{
					cli.BoolFlag{
						Name:  "show-revisions,r",
						Usage: "Show revisions available to rollback to",
					},
					cli.BoolFlag{
						Name:  "force,f",
						Usage: "Force rollback, deletes and recreates resources if needed during rollback. (default is false)",
					},
				},
			},
			{
				Name:        "upgrade",
				Usage:       "Upgrade an existing app to a newer version",
				Description: upgradeAppDescription,
				Action:      appUpgrade,
				ArgsUsage:   "[APP_NAME/APP_ID VERSION/TEMPLATE_PATH]",
				Flags: []cli.Flag{
					cli.StringFlag{
						Name:  "answers,a",
						Usage: "Path to an answers file, the format of the file is a map with key:value. Supports JSON and YAML",
					},
					cli.StringFlag{
						Name:  "values",
						Usage: "Path to a helm values file.",
					},
					cli.StringSliceFlag{
						Name:  "set",
						Usage: "Set answers for the template, can be used multiple times. Example: --set foo=bar",
					},
					cli.StringSliceFlag{
						Name:  "set-string",
						Usage: "Set string answers for the template (Skips Helm's type conversion), can be used multiple times. Example: --set-string foo=bar",
					},
					cli.BoolFlag{
						Name:  "show-versions,v",
						Usage: "Display versions available to upgrade to",
					},
					cli.BoolFlag{
						Name:  "reset",
						Usage: "Reset all catalog app answers",
					},
					cli.BoolFlag{
						Name:  "force,f",
						Usage: "Force upgrade, deletes and recreates resources if needed during upgrade. (default is false)",
					},
				},
			},
			{
				Name:        "list-templates",
				Aliases:     []string{"lt"},
				Usage:       "List templates available for installation",
				Description: "\nList all app templates in the current Rancher server",
				ArgsUsage:   "None",
				Action:      templateLs,
				Flags: []cli.Flag{
					formatFlag,
					cli.StringFlag{
						Name:  "catalog",
						Usage: "Specify the catalog to list templates for",
					},
				},
			},
			{
				Name:        "show-template",
				Aliases:     []string{"st"},
				Usage:       "Show versions available to install for an app template",
				Description: "\nShow all available versions of an app template",
				ArgsUsage:   "[TEMPLATE_ID]",
				Action:      templateShow,
			},
			{
				Name:      "show-app",
				Aliases:   []string{"sa"},
				Usage:     "Show an app's available versions and revisions",
				ArgsUsage: "[APP_NAME/APP_ID]",
				Action:    showApp,
				Flags: []cli.Flag{
					formatFlag,
				},
			},
			{
				Name:      "show-notes",
				Usage:     "Show contents of apps notes.txt",
				Action:    appNotes,
				ArgsUsage: "[APP_NAME/APP_ID]",
			},
		},
	}
}

func appLs(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}

	collection, err := c.ProjectClient.App.List(defaultListOpts(ctx))
	if err != nil {
		return err
	}

	writer := NewTableWriter([][]string{
		{"ID", "ID"},
		{"NAME", "App.Name"},
		{"STATE", "App.State"},
		{"CATALOG", "Catalog"},
		{"TEMPLATE", "Template"},
		{"VERSION", "Version"},
	}, ctx)
	defer writer.Close()

	for _, item := range collection.Data {
		appExternalID := item.ExternalID
		appTemplateFiles := make(map[string]string)
		if appExternalID == "" {
			// add namespace prefix to AppRevisionID to create a Rancher API style ID
			appRevisionID := strings.Replace(item.ID, item.Name, item.AppRevisionID, -1)
			appRevision, err := c.ProjectClient.AppRevision.ByID(appRevisionID)
			if err != nil {
				return err
			}
			if appRevision.Status != nil {
				appTemplateFiles = appRevision.Status.Files
			}
		}
		parsedInfo, err := parseTemplateInfo(appExternalID, appTemplateFiles)
		if err != nil {
			return err
		}
		appData := &AppData{
			ID:       item.ID,
			App:      item,
			Catalog:  parsedInfo["catalog"],
			Template: parsedInfo["template"],
			Version:  parsedInfo["version"],
		}
		writer.Write(appData)
	}

	return writer.Err()
}

func parseTemplateInfo(appExternalID string, appTemplateFiles map[string]string) (map[string]string, error) {
	if appExternalID != "" {
		parsedExternal, parseErr := parseExternalID(appExternalID)
		if parseErr != nil {
			return nil, errors.Wrap(parseErr, "failed to parse ExternalID from app")
		}
		return parsedExternal, nil
	}
	for fileName, fileContent := range appTemplateFiles {
		if strings.HasSuffix(fileName, "/Chart.yaml") || strings.HasSuffix(fileName, "/Chart.yml") {
			content, decodeErr := base64.StdEncoding.DecodeString(fileContent)
			if decodeErr != nil {
				return nil, errors.Wrap(decodeErr, "failed to decode Chart.yaml from app")
			}
			version := &chartVersion{}
			unmarshalErr := yaml.Unmarshal(content, version)
			if unmarshalErr != nil {
				return nil, errors.Wrap(unmarshalErr, "failed to parse Chart.yaml from app")
			}
			return map[string]string{
				"catalog":  "local directory",
				"template": version.Name,
				"version":  version.Version,
			}, nil
		}
	}
	return nil, errors.New("can't parse info from app")
}

func appDelete(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}

	c, err := GetClient(ctx)
	if err != nil {
		return err
	}

	for _, arg := range ctx.Args() {
		resource, err := Lookup(c, arg, "app")
		if err != nil {
			return err
		}

		app, err := c.ProjectClient.App.ByID(resource.ID)
		if err != nil {
			return err
		}

		err = c.ProjectClient.App.Delete(app)
		if err != nil {
			return err
		}
	}
	return nil
}

func appUpgrade(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}

	if ctx.Bool("show-versions") {
		return outputVersions(ctx, c)
	}

	if ctx.NArg() < 2 {
		return cli.ShowSubcommandHelp(ctx)
	}

	appName := ctx.Args().First()
	appVersionOrLocalTemplatePath := ctx.Args().Get(1)

	resource, err := Lookup(c, appName, "app")
	if err != nil {
		return err
	}

	app, err := c.ProjectClient.App.ByID(resource.ID)
	if err != nil {
		return err
	}

	answers := app.Answers
	answersSetString := app.AnswersSetString
	values := app.ValuesYaml

	answers, answersSetString, err = processAnswerUpdates(ctx, answers, answersSetString)
	if err != nil {
		return err
	}

	values, err = processValueUpgrades(ctx, values)
	if err != nil {
		return err
	}

	force := ctx.Bool("force")

	au := &projectClient.AppUpgradeConfig{
		Answers:          answers,
		AnswersSetString: answersSetString,
		ForceUpgrade:     force,
		ValuesYaml:       values,
	}

	if resolveTemplatePath(appVersionOrLocalTemplatePath) {
		// if it is a path, upgrade install charts locally
		localTemplatePath := appVersionOrLocalTemplatePath
		_, files, err := walkTemplateDirectory(localTemplatePath)
		if err != nil {
			return err
		}
		au.Files = files
	} else {
		appVersion := appVersionOrLocalTemplatePath
		externalID, err := updateExternalIDVersion(app.ExternalID, appVersion)
		if err != nil {
			return err
		}
		filter := defaultListOpts(ctx)
		filter.Filters["externalId"] = externalID
		template, err := c.ManagementClient.TemplateVersion.List(filter)
		if err != nil {
			return err
		}
		if len(template.Data) == 0 {
			return fmt.Errorf("version %s is not valid", appVersion)
		}
		au.ExternalID = template.Data[0].ExternalID
	}

	return c.ProjectClient.App.ActionUpgrade(app, au)
}

func updateExternalIDVersion(externalID string, version string) (string, error) {
	u, err := url.Parse(externalID)
	if err != nil {
		return "", err
	}
	oldVersionQuery := fmt.Sprintf("version=%s", u.Query().Get("version"))
	newVersionQuery := fmt.Sprintf("version=%s", version)
	return strings.Replace(externalID, oldVersionQuery, newVersionQuery, 1), nil
}

func appRollback(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}

	if ctx.Bool("show-revisions") {
		return outputRevisions(ctx, c)
	}

	if ctx.NArg() < 2 {
		return cli.ShowSubcommandHelp(ctx)
	}

	force := ctx.Bool("force")

	resource, err := Lookup(c, ctx.Args().First(), "app")
	if err != nil {
		return err
	}

	app, err := c.ProjectClient.App.ByID(resource.ID)
	if err != nil {
		return err
	}

	revisionResource, err := Lookup(c, ctx.Args().Get(1), "appRevision")
	if err != nil {
		return err
	}

	revision, err := c.ProjectClient.AppRevision.ByID(revisionResource.ID)
	if err != nil {
		return err
	}

	rr := &projectClient.RollbackRevision{
		ForceUpgrade: force,
		RevisionID:   revision.Name,
	}
	return c.ProjectClient.App.ActionRollback(app, rr)
}

func templateLs(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}

	filter := defaultListOpts(ctx)
	if ctx.String("app") != "" {
		resource, err := Lookup(c, ctx.String("app"), "app")
		if err != nil {
			return err
		}
		filter.Filters["appId"] = resource.ID
	}

	collection, err := c.ManagementClient.Template.List(filter)
	if err != nil {
		return err
	}

	writer := NewTableWriter([][]string{
		{"ID", "ID"},
		{"NAME", "Template.Name"},
		{"CATEGORY", "Category"},
	}, ctx)
	defer writer.Close()

	for _, item := range collection.Data {
		writer.Write(&TemplateData{
			ID:       item.ID,
			Template: item,
			Category: strings.Join(item.Categories, ","),
		})
	}

	return writer.Err()
}

func templateShow(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}

	c, err := GetClient(ctx)
	if err != nil {
		return err
	}

	resource, err := Lookup(c, ctx.Args().First(), "template")
	if err != nil {
		return err
	}

	template, err := getFilteredTemplate(ctx, c, resource.ID)
	if err != nil {
		return err
	}

	sortedVersions, err := sortTemplateVersions(template)
	if err != nil {
		return err
	}

	if len(sortedVersions) == 0 {
		fmt.Println("No app versions available to install for this version of Rancher server")
	}
	for _, version := range sortedVersions {
		fmt.Println(version)
	}

	return nil
}

func templateInstall(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}
	templateName := ctx.Args().First()
	appName :=
ctx.Args().Get(1) c, err := GetClient(ctx) if err != nil { return err } app := &projectClient.App{ Name: appName, } if resolveTemplatePath(templateName) { // if it is a path, install charts locally chartName, files, err := walkTemplateDirectory(templateName) if err != nil { return err } answers, answersSetString, err := processAnswerInstall(ctx, nil, nil, nil, false, false) if err != nil { return err } values, err := processValueInstall(ctx, nil, "") if err != nil { return err } app.Files = files app.Answers = answers app.AnswersSetString = answersSetString app.ValuesYaml = values namespace := ctx.String("namespace") if namespace == "" { namespace = chartName + "-" + RandomLetters(5) } err = createNamespace(c, namespace) if err != nil { return err } app.TargetNamespace = namespace } else { resource, err := Lookup(c, templateName, "template") if err != nil { return err } template, err := getFilteredTemplate(ctx, c, resource.ID) if err != nil { return err } latestVersion, err := getTemplateLatestVersion(template) if err != nil { return err } templateVersionID := templateVersionIDFromVersionLink(template.VersionLinks[latestVersion]) userVersion := ctx.String("version") if userVersion != "" { if link, ok := template.VersionLinks[userVersion]; ok { templateVersionID = templateVersionIDFromVersionLink(link) } else { return fmt.Errorf( "version %s for template %s is invalid, run 'rancher app show-template %s' for a list of versions", userVersion, templateName, templateName, ) } } templateVersion, err := c.ManagementClient.TemplateVersion.ByID(templateVersionID) if err != nil { return err } interactive := !ctx.Bool("no-prompt") answers, answersSetString, err := processAnswerInstall(ctx, templateVersion, nil, nil, interactive, false) if err != nil { return err } values, err := processValueInstall(ctx, templateVersion, "") if err != nil { return err } namespace := ctx.String("namespace") if namespace == "" { namespace = template.Name + "-" + RandomLetters(5) } err = 
createNamespace(c, namespace) if err != nil { return err } app.Answers = answers app.AnswersSetString = answersSetString app.ValuesYaml = values app.ExternalID = templateVersion.ExternalID app.TargetNamespace = namespace } app.Wait = ctx.Bool("helm-wait") app.Timeout = ctx.Int64("helm-timeout") madeApp, err := c.ProjectClient.App.Create(app) if err != nil { return err } fmt.Printf("run \"app show-notes %s\" to view app notes once app is ready\n", madeApp.Name) return nil } // appNotes prints notes from app's notes.txt file func appNotes(ctx *cli.Context) error { c, err := GetClient(ctx) if err != nil { return err } if ctx.NArg() < 1 { return cli.ShowSubcommandHelp(ctx) } resource, err := Lookup(c, ctx.Args().First(), "app") if err != nil { return err } app, err := c.ProjectClient.App.ByID(resource.ID) if err != nil { return err } if len(app.Notes) > 0 { fmt.Println(app.Notes) } else { fmt.Println("no notes to print") } return nil } func resolveTemplatePath(templateName string) bool { return templateName == "." 
|| strings.Contains(templateName, "\\") || strings.Contains(templateName, "/") } func walkTemplateDirectory(templatePath string) (string, map[string]string, error) { templateAbsPath, parsedErr := filepath.Abs(templatePath) if parsedErr != nil { return "", nil, parsedErr } if _, statErr := os.Stat(templateAbsPath); statErr != nil { return "", nil, statErr } var ( chartName string files = make(map[string]string) err error ) err = filepath.Walk(templateAbsPath, func(path string, info os.FileInfo, err error) error { if err != nil { return err } if info.IsDir() { return nil } if !strings.EqualFold(info.Name(), "Chart.yaml") { return nil } version := &chartVersion{} content, err := os.ReadFile(path) if err != nil { return err } rootDir := filepath.Dir(path) if err := yaml.Unmarshal(content, version); err != nil { return err } chartName = version.Name err = filepath.Walk(rootDir, func(path string, info os.FileInfo, err error) error { if err != nil { return err } if info.IsDir() { return nil } content, err := os.ReadFile(path) if err != nil { return err } if len(content) > 0 { key := filepath.Join(chartName, strings.TrimPrefix(path, rootDir+"/")) files[key] = base64.StdEncoding.EncodeToString(content) } return nil }) if err != nil { return err } return filepath.SkipDir }) return chartName, files, err } func showApp(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } err = outputRevisions(ctx, c) if err != nil { return err } fmt.Println() err = outputVersions(ctx, c) if err != nil { return err } return nil } func outputVersions(ctx *cli.Context, c *cliclient.MasterClient) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } resource, err := Lookup(c, ctx.Args().First(), "app") if err != nil { return err } app, err := c.ProjectClient.App.ByID(resource.ID) if err != nil { return err } externalID := app.ExternalID if externalID == "" { // local folder app doesn't show any 
version information return nil } externalInfo, err := parseExternalID(externalID) if err != nil { return err } template, err := getFilteredTemplate(ctx, c, "cattle-global-data:"+externalInfo["catalog"]+"-"+externalInfo["template"]) if err != nil { return err } sortedVersions, err := sortTemplateVersions(template) if err != nil { return err } if len(sortedVersions) == 0 { fmt.Println("No app versions available to install for this version of Rancher server") return nil } writer := NewTableWriter([][]string{ {"CURRENT", "Current"}, {"VERSION", "Version"}, }, ctx) defer writer.Close() for _, version := range sortedVersions { var current string if version.String() == externalInfo["version"] { current = "*" } writer.Write(&VersionData{ Current: current, Version: version.String(), }) } return writer.Err() } func outputRevisions(ctx *cli.Context, c *cliclient.MasterClient) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } resource, err := Lookup(c, ctx.Args().First(), "app") if err != nil { return err } app, err := c.ProjectClient.App.ByID(resource.ID) if err != nil { return err } revisions := &projectClient.AppRevisionCollection{} err = c.ProjectClient.GetLink(*resource, "revision", revisions) if err != nil { return err } var sorted revSlice for _, rev := range revisions.Data { parsedTime, err := time.Parse(time.RFC3339, rev.Created) if err != nil { return err } parsedInfo, err := parseTemplateInfo(rev.Status.ExternalID, rev.Status.Files) if err != nil { return err } reversionData := revision{ Name: rev.Name, Created: parsedTime, Catalog: parsedInfo["catalog"], Template: parsedInfo["template"], Version: parsedInfo["version"], } sorted = append(sorted, reversionData) } sort.Sort(sorted) writer := NewTableWriter([][]string{ {"CURRENT", "Current"}, {"REVISION", "Name"}, {"CATALOG", "Catalog"}, {"TEMPLATE", "Template"}, {"VERSION", "Version"}, {"CREATED", "Human"}, }, ctx) defer writer.Close() for _, rev := range sorted { if rev.Name == app.AppRevisionID { 
rev.Current = "*" } rev.Human = rev.Created.Format("02 Jan 2006 15:04:05 MST") writer.Write(rev) } return writer.Err() } func templateVersionIDFromVersionLink(s string) string { pieces := strings.Split(s, "/") return pieces[len(pieces)-1] } // parseExternalID returns a map with the keys catalog, template and version func parseExternalID(e string) (map[string]string, error) { parsed := make(map[string]string) u, err := url.Parse(e) if err != nil { return parsed, err } q := u.Query() for key, value := range q { if len(value) > 0 { parsed[key] = value[0] } } return parsed, nil } // getFilteredTemplate uses the rancherVersion in the template request to get the // filtered template with incompatible versions dropped func getFilteredTemplate(ctx *cli.Context, c *cliclient.MasterClient, templateID string) (*managementClient.Template, error) { ver, err := getRancherServerVersion(c) if err != nil { return nil, err } filter := defaultListOpts(ctx) filter.Filters["id"] = templateID filter.Filters["rancherVersion"] = ver template, err := c.ManagementClient.Template.List(filter) if err != nil { return nil, err } if len(template.Data) == 0 { return nil, fmt.Errorf("template %v not found", templateID) } return &template.Data[0], nil } // getTemplateLatestVersion returns the newest version of the template func getTemplateLatestVersion(template *managementClient.Template) (string, error) { if len(template.VersionLinks) == 0 { return "", errors.New("no versions found for this template (the chart you are trying to install may be intentionally hidden or deprecated for your Rancher version)") } sorted, err := sortTemplateVersions(template) if err != nil { return "", err } return sorted[len(sorted)-1].String(), nil } func sortTemplateVersions(template *managementClient.Template) ([]*gover.Version, error) { var versions []*gover.Version for key := range template.VersionLinks { v, err := gover.NewVersion(key) if err != nil { return nil, err } versions = append(versions, v) } 
sort.Sort(gover.Collection(versions)) return versions, nil } // createNamespace checks if a namespace exists and creates it if needed func createNamespace(c *cliclient.MasterClient, n string) error { filter := defaultListOpts(nil) filter.Filters["name"] = n namespaces, err := c.ClusterClient.Namespace.List(filter) if err != nil { return err } if len(namespaces.Data) == 0 { newNamespace := &clusterClient.Namespace{ Name: n, ProjectID: c.UserConfig.Project, } ns, err := c.ClusterClient.Namespace.Create(newNamespace) if err != nil { return err } nsID := ns.ID startTime := time.Now() for { logrus.Debugf("Namespace create wait - Name: %s, State: %s, Transitioning: %s", ns.Name, ns.State, ns.Transitioning) if time.Since(startTime) > 30*time.Second { return fmt.Errorf("timed out waiting for new namespace %s", ns.Name) } ns, err = c.ClusterClient.Namespace.ByID(nsID) if err != nil { if e, ok := err.(*clientbase.APIError); ok && e.StatusCode == http.StatusForbidden { // the new namespace was created successfully but cannot be retrieved until its RBAC rules are ready. 
time.Sleep(500 * time.Millisecond) continue } return err } if ns.State == "active" { break } time.Sleep(500 * time.Millisecond) } } else { if namespaces.Data[0].ProjectID != c.UserConfig.Project { return fmt.Errorf("namespace %s already exists in another project", n) } } return nil } // processValueInstall creates a map of the values file and fills in missing entries with defaults func processValueInstall(ctx *cli.Context, tv *managementClient.TemplateVersion, existingValues string) (string, error) { values, err := processValues(ctx, existingValues) if err != nil { return existingValues, err } // add default values if entries missing from map err = fillInDefaultAnswers(tv, values) if err != nil { return existingValues, err } // change map back into string to be consistent with ui existingValues, err = parseMapToYamlString(values) if err != nil { return existingValues, err } return existingValues, nil } // processValueUpgrades creates map from existing values and applies updates func processValueUpgrades(ctx *cli.Context, existingValues string) (string, error) { values, err := processValues(ctx, existingValues) if err != nil { return existingValues, err } // change map back into string to be consistent with ui existingValues, err = parseMapToYamlString(values) if err != nil { return existingValues, err } return existingValues, nil } // processValues creates a map of the values file func processValues(ctx *cli.Context, existingValues string) (map[string]interface{}, error) { var err error values := make(map[string]interface{}) if existingValues != "" { // parse values into map to ensure previous values are considered on update values, err = createValuesMap([]byte(existingValues)) if err != nil { return values, err } } if ctx.String("values") != "" { // if values file passed in, overwrite defaults with new key value pair values, err = parseFile(ctx.String("values")) if err != nil { return values, err } } return values, nil } // processAnswerInstall adds answers to given map, and prompts users to answer chart questions 
if interactive is true func processAnswerInstall( ctx *cli.Context, tv *managementClient.TemplateVersion, answers, answersSetString map[string]string, interactive bool, multicluster bool, ) (map[string]string, map[string]string, error) { var err error answers, answersSetString, err = processAnswerUpdates(ctx, answers, answersSetString) if err != nil { return answers, answersSetString, err } // interactive occurs before adding defaults to ensure all questions are asked if interactive { // answers to questions will be added to map err := askQuestions(tv, answers) if err != nil { return answers, answersSetString, err } } if multicluster && !interactive { // add default values if answers missing from map err = fillInDefaultAnswersStringMap(tv, answers) if err != nil { return answers, answersSetString, err } } return answers, answersSetString, nil } func processAnswerUpdates(ctx *cli.Context, answers, answersSetString map[string]string) (map[string]string, map[string]string, error) { if answers == nil || ctx.Bool("reset") { // this would not be possible without returning a map answers = make(map[string]string) } if answersSetString == nil || ctx.Bool("reset") { // this would not be possible without returning a map answersSetString = make(map[string]string) } if ctx.String("answers") != "" { err := parseAnswersFile(ctx.String("answers"), answers) if err != nil { return answers, answersSetString, err } } for _, answer := range ctx.StringSlice("set") { parts := strings.SplitN(answer, "=", 2) if len(parts) == 2 { answers[parts[0]] = parts[1] } } for _, answer := range ctx.StringSlice("set-string") { parts := strings.SplitN(answer, "=", 2) if len(parts) == 2 { answersSetString[parts[0]] = parts[1] } } return answers, answersSetString, nil } // parseMapToYamlString creates a yaml string from answers map func parseMapToYamlString(answerMap map[string]interface{}) (string, error) { yamlFileString, err := yaml.Marshal(answerMap) if 
err != nil { return "", err } return string(yamlFileString), nil } func parseAnswersFile(location string, answers map[string]string) error { holder, err := parseFile(location) if err != nil { return err } for key, value := range holder { switch value.(type) { case nil: answers[key] = "" default: answers[key] = fmt.Sprintf("%v", value) } } return nil } func parseFile(location string) (map[string]interface{}, error) { bytes, err := os.ReadFile(location) if err != nil { return nil, err } return createValuesMap(bytes) } func createValuesMap(bytes []byte) (map[string]interface{}, error) { values := make(map[string]interface{}) if hasPrefix(bytes, []byte("{")) { // this is the check that "readFileReturnJSON" uses to differentiate between JSON and YAML if err := json.Unmarshal(bytes, &values); err != nil { return nil, err } } else { if err := yaml.Unmarshal(bytes, &values); err != nil { return nil, err } } return values, nil } func askQuestions(tv *managementClient.TemplateVersion, answers map[string]string) error { var asked bool var attempts int if tv == nil { return nil } for { attempts++ for _, question := range tv.Questions { if _, ok := answers[question.Variable]; !ok && checkShowIfStringMap(question.ShowIf, answers) { asked = true answers[question.Variable] = askQuestion(question) if checkShowSubquestionIfStringMap(question, answers) { for _, subQuestion := range question.Subquestions { // only ask the question if there is not an answer and it passes the ShowIf check if _, ok := answers[subQuestion.Variable]; !ok && checkShowIfStringMap(subQuestion.ShowIf, answers) { answers[subQuestion.Variable] = askSubQuestion(subQuestion) } } } } } if !asked { return nil } else if attempts >= 10 { return errors.New("attempted questions 10 times") } asked = false } } func askQuestion(q managementClient.Question) string { if len(q.Description) > 0 { fmt.Printf("\nDescription: %s\n", q.Description) } if len(q.Options) > 0 { options := strings.Join(q.Options, ", ") 
fmt.Printf("Accepted Options: %s\n", options) } fmt.Printf("Name: %s\nVariable Name: %s\nDefault:[%s]\nEnter answer or 'return' for default:", q.Label, q.Variable, q.Default) answer, err := bufio.NewReader(os.Stdin).ReadString('\n') if err != nil { return "" } answer = strings.TrimSpace(answer) if answer == "" { answer = q.Default } return answer } func askSubQuestion(q managementClient.SubQuestion) string { if len(q.Description) > 0 { fmt.Printf("\nDescription: %s\n", q.Description) } if len(q.Options) > 0 { options := strings.Join(q.Options, ", ") fmt.Printf("Accepted Options: %s\n", options) } fmt.Printf("Name: %s\nVariable Name: %s\nDefault:[%s]\nEnter answer or 'return' for default:", q.Label, q.Variable, q.Default) answer, err := bufio.NewReader(os.Stdin).ReadString('\n') if err != nil { return "" } answer = strings.TrimSpace(answer) if answer == "" { answer = q.Default } return answer } // fillInDefaultAnswers parses through questions and creates an answer map with default answers if missing from map func fillInDefaultAnswers(tv *managementClient.TemplateVersion, answers map[string]interface{}) error { if tv == nil { return nil } for _, question := range tv.Questions { if _, ok := answers[question.Variable]; !ok && checkShowIf(question.ShowIf, answers) { answers[question.Variable] = question.Default if checkShowSubquestionIf(question, answers) { for _, subQuestion := range question.Subquestions { // set the sub-question if the showIf check passes if _, ok := answers[subQuestion.Variable]; !ok && checkShowIf(subQuestion.ShowIf, answers) { answers[subQuestion.Variable] = subQuestion.Default } } } } } if answers == nil { return errors.New("could not generate default answers") } return nil } // checkShowIf uses the ShowIf field to determine if a question should be asked // this field comes in the format <key>=<value> where key is a question id and value is the answer func checkShowIf(s string, answers map[string]interface{}) bool { // No ShowIf so always ask the 
question if len(s) == 0 { return true } pieces := strings.Split(s, "=") if len(pieces) != 2 { return false } //if the key exists and the val matches the expression ask the question if val, ok := answers[pieces[0]]; ok && fmt.Sprintf("%v", val) == pieces[1] { return true } return false } // fillInDefaultAnswersStringMap parses through questions and creates an answer map with default answers if missing from map func fillInDefaultAnswersStringMap(tv *managementClient.TemplateVersion, answers map[string]string) error { if tv == nil { return nil } for _, question := range tv.Questions { if _, ok := answers[question.Variable]; !ok && checkShowIfStringMap(question.ShowIf, answers) { answers[question.Variable] = question.Default if checkShowSubquestionIfStringMap(question, answers) { for _, subQuestion := range question.Subquestions { // set the sub-question if the showIf check passes if _, ok := answers[subQuestion.Variable]; !ok && checkShowIfStringMap(subQuestion.ShowIf, answers) { answers[subQuestion.Variable] = subQuestion.Default } } } } } if answers == nil { return errors.New("could not generate default answers") } return nil } // checkShowIfStringMap uses the ShowIf field to determine if a question should be asked // this field comes in the format <key>=<value> where key is a question id and value is the answer func checkShowIfStringMap(s string, answers map[string]string) bool { // No ShowIf so always ask the question if len(s) == 0 { return true } pieces := strings.Split(s, "=") if len(pieces) != 2 { return false } //if the key exists and the val matches the expression ask the question if val, ok := answers[pieces[0]]; ok && val == pieces[1] { return true } return false } func checkShowSubquestionIf(q managementClient.Question, answers map[string]interface{}) bool { if val, ok := answers[q.Variable]; ok { if fmt.Sprintf("%v", val) == q.ShowSubquestionIf { return true } } return false } func checkShowSubquestionIfStringMap(q managementClient.Question, answers 
map[string]string) bool { if val, ok := answers[q.Variable]; ok { if val == q.ShowSubquestionIf { return true } } return false } 07070100000010000081A4000000000000000000000001673C86850000038B000000000000000000000000000000000000002300000000rancher-cli-2.10.0/cmd/app_test.gopackage cmd import ( "testing" "github.com/stretchr/testify/assert" ) func TestGetExternalIDInVersion(t *testing.T) { assert := assert.New(t) got, err := updateExternalIDVersion("catalog://?catalog=library&template=cert-manager&version=v0.5.2", "v1.2.3") assert.Nil(err) assert.Equal("catalog://?catalog=library&template=cert-manager&version=v1.2.3", got) got, err = updateExternalIDVersion("catalog://?catalog=c-29wkq/clusterscope&type=clusterCatalog&template=mysql&version=0.3.8", "0.3.9") assert.Nil(err) assert.Equal("catalog://?catalog=c-29wkq/clusterscope&type=clusterCatalog&template=mysql&version=0.3.9", got) got, err = updateExternalIDVersion("catalog://?catalog=p-j9gfw/projectscope&type=projectCatalog&template=grafana&version=0.0.31", "0.0.30") assert.Nil(err) assert.Equal("catalog://?catalog=p-j9gfw/projectscope&type=projectCatalog&template=grafana&version=0.0.30", got) } 07070100000011000081A4000000000000000000000001673C868500001A5E000000000000000000000000000000000000002200000000rancher-cli-2.10.0/cmd/catalog.gopackage cmd import ( "strings" "time" "github.com/pkg/errors" managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3" "github.com/sirupsen/logrus" "github.com/urfave/cli" ) const ( addCatalogDescription = ` Add a new catalog to the Rancher server Example: # Add a catalog $ rancher catalog add foo https://my.catalog # Add a catalog and specify the branch to use $ rancher catalog add --branch awesomebranch foo https://my.catalog # Add a catalog and specify the helm version to use. 
Specify 'v2' for helm 2 and 'v3' for helm 3 $ rancher catalog add --helm-version v3 foo https://my.catalog ` refreshCatalogDescription = ` Refresh a catalog on the Rancher server Example: # Refresh a catalog $ rancher catalog refresh foo # Refresh multiple catalogs $ rancher catalog refresh foo bar baz # Refresh all catalogs $ rancher catalog refresh --all # Refresh is asynchronous unless you specify '--wait' $ rancher catalog refresh --all --wait --wait-timeout=60 # Default wait timeout is 60 seconds, set to 0 to remove the timeout $ rancher catalog refresh --all --wait --wait-timeout=0 ` ) type CatalogData struct { ID string Catalog managementClient.Catalog } func CatalogCommand() cli.Command { catalogLsFlags := []cli.Flag{ formatFlag, quietFlag, cli.BoolFlag{ Name: "verbose,v", Usage: "Include the catalog's state", }, } return cli.Command{ Name: "catalog", Usage: "Operations with catalogs", Action: defaultAction(catalogLs), Flags: catalogLsFlags, Subcommands: []cli.Command{ { Name: "ls", Usage: "List catalogs", Description: "\nList all catalogs in the current Rancher server", ArgsUsage: "None", Action: catalogLs, Flags: catalogLsFlags, }, { Name: "add", Usage: "Add a catalog", Description: addCatalogDescription, ArgsUsage: "[NAME, URL]", Action: catalogAdd, Flags: []cli.Flag{ cli.StringFlag{ Name: "branch", Usage: "Branch from the url to use", Value: "master", }, cli.StringFlag{ Name: "helm-version", Usage: "Version of helm the app(s) in your catalog will use for deployment. 
Use 'v2' for helm 2 or 'v3' for helm 3", Value: "v2", }, }, }, { Name: "delete", Usage: "Delete a catalog", Description: "\nDelete a catalog from the Rancher server", ArgsUsage: "[CATALOG_NAME/CATALOG_ID]", Action: catalogDelete, }, { Name: "refresh", Usage: "Refresh catalog templates", Description: refreshCatalogDescription, ArgsUsage: "[CATALOG_NAME/CATALOG_ID]...", Action: catalogRefresh, Flags: []cli.Flag{ cli.BoolFlag{ Name: "all", Usage: "Refresh all catalogs", }, cli.BoolFlag{ Name: "wait,w", Usage: "Wait for catalog(s) to become active", }, cli.IntFlag{ Name: "wait-timeout", Usage: "Wait timeout duration in seconds", Value: 60, }, }, }, }, } } func catalogLs(ctx *cli.Context) error { c, err := GetClient(ctx) if err != nil { return err } collection, err := c.ManagementClient.Catalog.List(defaultListOpts(ctx)) if err != nil { return err } fields := [][]string{ {"ID", "ID"}, {"NAME", "Catalog.Name"}, {"URL", "Catalog.URL"}, {"BRANCH", "Catalog.Branch"}, {"KIND", "Catalog.Kind"}, {"HELMVERSION", "Catalog.HelmVersion"}, } if ctx.Bool("verbose") { fields = append(fields, []string{"STATE", "Catalog.State"}) } writer := NewTableWriter(fields, ctx) defer writer.Close() for _, item := range collection.Data { writer.Write(&CatalogData{ ID: item.ID, Catalog: item, }) } return writer.Err() } func catalogAdd(ctx *cli.Context) error { if len(ctx.Args()) < 2 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } catalog := &managementClient.Catalog{ Branch: ctx.String("branch"), Name: ctx.Args().First(), Kind: "helm", URL: ctx.Args().Get(1), HelmVersion: strings.ToLower(ctx.String("helm-version")), } _, err = c.ManagementClient.Catalog.Create(catalog) if err != nil { return err } return nil } func catalogDelete(ctx *cli.Context) error { if len(ctx.Args()) < 1 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } for _, arg := range ctx.Args() { resource, err := Lookup(c, arg, "catalog") if err != 
nil { return err } catalog, err := c.ManagementClient.Catalog.ByID(resource.ID) if err != nil { return err } err = c.ManagementClient.Catalog.Delete(catalog) if err != nil { return err } } return nil } func catalogRefresh(ctx *cli.Context) error { if len(ctx.Args()) < 1 && !ctx.Bool("all") { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } var catalogs []managementClient.Catalog if ctx.Bool("all") { opts := baseListOpts() collection, err := c.ManagementClient.Catalog.List(opts) if err != nil { return err } // save the catalogs in case we need to wait for them to become active catalogs = collection.Data _, err = c.ManagementClient.Catalog.CollectionActionRefresh(collection) if err != nil { return err } } else { for _, arg := range ctx.Args() { resource, err := Lookup(c, arg, "catalog") if err != nil { return err } catalog, err := c.ManagementClient.Catalog.ByID(resource.ID) if err != nil { return err } // collect the refreshing catalogs in case we need to wait for them later catalogs = append(catalogs, *catalog) _, err = c.ManagementClient.Catalog.ActionRefresh(catalog) if err != nil { return err } } } if ctx.Bool("wait") { timeout := time.Duration(ctx.Int("wait-timeout")) * time.Second start := time.Now() logrus.Debugf("catalog: waiting for catalogs to become active (timeout=%v)", timeout) for _, catalog := range catalogs { logrus.Debugf("catalog: waiting for %s to become active", catalog.Name) resource, err := Lookup(c, catalog.Name, "catalog") if err != nil { return err } catalog, err := c.ManagementClient.Catalog.ByID(resource.ID) if err != nil { return err } for catalog.State != "active" { time.Sleep(time.Second) catalog, err = c.ManagementClient.Catalog.ByID(resource.ID) if err != nil { return err } if timeout > 0 && time.Since(start) > timeout { return errors.New("catalog: timed out waiting for refresh") } } } logrus.Debugf("catalog: waited for %v", time.Since(start)) } return nil } 
07070100000012000081A4000000000000000000000001673C868500004DD9000000000000000000000000000000000000002200000000rancher-cli-2.10.0/cmd/cluster.gopackage cmd import ( "encoding/json" "errors" "fmt" "slices" "strconv" "strings" "github.com/rancher/cli/cliclient" managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3" "github.com/sirupsen/logrus" "github.com/urfave/cli" ) const ( importDescription = ` Imports an existing cluster to be used in rancher by using a generated kubectl command to run in your existing Kubernetes cluster. ` importClusterNotice = "If you get an error about 'certificate signed by unknown authority' " + "because your Rancher installation is running with an untrusted/self-signed SSL " + "certificate, run the command below instead to bypass the certificate check:" ) type ClusterData struct { ID string Current string Cluster managementClient.Cluster Name string Provider string Nodes int64 CPU string RAM string Pods string } func ClusterCommand() cli.Command { return cli.Command{ Name: "clusters", Aliases: []string{"cluster"}, Usage: "Operations on clusters", Action: defaultAction(clusterLs), Subcommands: []cli.Command{ { Name: "ls", Usage: "List clusters", Description: "Lists all clusters", ArgsUsage: "None", Action: clusterLs, Flags: []cli.Flag{ cli.StringFlag{ Name: "format", Usage: "'json', 'yaml' or Custom format: '{{.Cluster.ID}} {{.Cluster.Name}}'", }, quietFlag, }, }, { Name: "create", Usage: "Creates a new empty cluster", Description: "Create a new custom cluster with desired configuration", ArgsUsage: "[NEWCLUSTERNAME...]", Action: clusterCreate, Flags: []cli.Flag{ cli.StringFlag{ Name: "description", Usage: "Description to apply to the cluster", }, cli.BoolTFlag{ Name: "disable-docker-version", Usage: "Allow unsupported versions of docker on the nodes, [default=true]", }, cli.BoolFlag{ Name: "import", Usage: "Mark the cluster for import, this is required if the cluster is going to be used to import an existing k8s 
cluster", }, cli.StringFlag{ Name: "k8s-version", Usage: "Kubernetes version to use for the cluster, pass in 'list' to see available versions", }, cli.StringFlag{ Name: "network-provider", Usage: "Network provider for the cluster (flannel, canal, calico)", Value: "canal", }, cli.StringFlag{ Name: "rke-config", Usage: "Location of an rke config file to import. Can be JSON or YAML format", }, }, }, { Name: "import", Usage: "Import an existing Kubernetes cluster into a Rancher cluster", Description: importDescription, ArgsUsage: "[CLUSTERID CLUSTERNAME]", Action: clusterImport, Flags: []cli.Flag{ quietFlag, }, }, { Name: "add-node", Usage: "Outputs the docker command needed to add a node to an existing Rancher custom cluster", ArgsUsage: "[CLUSTERID CLUSTERNAME]", Action: clusterAddNode, Flags: []cli.Flag{ cli.StringSliceFlag{ Name: "label", Usage: "Label to apply to a node in the format [name]=[value]", }, cli.BoolFlag{ Name: "etcd", Usage: "Use node for etcd", }, cli.BoolFlag{ Name: "management", Usage: "Use node for management (DEPRECATED, use controlplane instead)", }, cli.BoolFlag{ Name: "controlplane", Usage: "Use node for controlplane", }, cli.BoolFlag{ Name: "worker", Usage: "Use node as a worker", }, quietFlag, }, }, { Name: "delete", Aliases: []string{"rm"}, Usage: "Delete a cluster", ArgsUsage: "[CLUSTERID/CLUSTERNAME...]", Action: clusterDelete, }, { Name: "export", Usage: "Export a cluster", ArgsUsage: "[CLUSTERID/CLUSTERNAME...]", Action: clusterExport, }, { Name: "kubeconfig", Aliases: []string{"kf"}, Usage: "Return the kube config used to access the cluster", ArgsUsage: "[CLUSTERID CLUSTERNAME]", Action: clusterKubeConfig, }, { Name: "add-member-role", Usage: "Add a member to the cluster", Action: addClusterMemberRoles, Description: "Examples:\n #Create the roles of 'nodes-view' and 'projects-view' for a user named 'user1'\n rancher cluster add-member-role user1 nodes-view projects-view\n", ArgsUsage: "[USERNAME, ROLE...]", Flags: []cli.Flag{ 
cli.StringFlag{ Name: "cluster-id", Usage: "Optional cluster ID to add member role to, defaults to the current context", }, }, }, { Name: "delete-member-role", Usage: "Delete a member from the cluster", Action: deleteClusterMemberRoles, Description: "Examples:\n #Delete the roles of 'nodes-view' and 'projects-view' for a user named 'user1'\n rancher cluster delete-member-role user1 nodes-view projects-view\n", ArgsUsage: "[USERNAME, ROLE...]", Flags: []cli.Flag{ cli.StringFlag{ Name: "cluster-id", Usage: "Optional cluster ID to remove member role from, defaults to the current context", }, }, }, { Name: "list-roles", Usage: "List all available roles for a cluster", Action: listClusterRoles, }, { Name: "list-members", Usage: "List current members of the cluster", Action: listClusterMembers, Flags: []cli.Flag{ cli.StringFlag{ Name: "cluster-id", Usage: "Optional cluster ID to list members for, defaults to the current context", }, }, }, }, } } func clusterLs(ctx *cli.Context) error { c, err := GetClient(ctx) if err != nil { return err } collection, err := c.ManagementClient.Cluster.List(defaultListOpts(ctx)) if err != nil { return err } writer := NewTableWriter([][]string{ {"CURRENT", "Current"}, {"ID", "ID"}, {"STATE", "Cluster.State"}, {"NAME", "Name"}, {"PROVIDER", "Provider"}, {"NODES", "Nodes"}, {"CPU", "CPU"}, {"RAM", "RAM"}, {"PODS", "Pods"}, }, ctx) defer writer.Close() for _, item := range collection.Data { var current string if item.ID == c.UserConfig.FocusedCluster() { current = "*" } writer.Write(&ClusterData{ ID: item.ID, Current: current, Cluster: item, Name: getClusterName(&item), Provider: getClusterProvider(item), Nodes: item.NodeCount, CPU: getClusterCPU(item), RAM: getClusterRAM(item), Pods: getClusterPods(item), }) } return writer.Err() } func clusterCreate(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } k8sVersion := ctx.String("k8s-version") if k8sVersion != 
"" { k8sVersions, err := getClusterK8sOptions(c) if err != nil { return err } if slices.Contains(k8sVersions, k8sVersion) { fmt.Println("Available Kubernetes versions:") for _, val := range k8sVersions { fmt.Println(val) } return nil } } config, err := getClusterConfig(ctx) if err != nil { return err } createdCluster, err := c.ManagementClient.Cluster.Create(config) if err != nil { return err } fmt.Printf("Successfully created cluster %v\n", createdCluster.Name) return nil } func clusterImport(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } resource, err := Lookup(c, ctx.Args().First(), "cluster") if err != nil { return err } cluster, err := getClusterByID(c, resource.ID) if err != nil { return err } if cluster.Driver != "" { return errors.New("existing k8s cluster can't be imported into this cluster") } clusterToken, err := getClusterRegToken(ctx, c, cluster.ID) if err != nil { return err } if ctx.Bool("quiet") { fmt.Println(clusterToken.Command) fmt.Println(clusterToken.InsecureCommand) return nil } fmt.Printf("Run the following command in your cluster:\n%s\n\n%s\n%s\n", clusterToken.Command, importClusterNotice, clusterToken.InsecureCommand) return nil } // clusterAddNode prints the command needed to add a node to a cluster func clusterAddNode(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } resource, err := Lookup(c, ctx.Args().First(), "cluster") if err != nil { return err } cluster, err := getClusterByID(c, resource.ID) if err != nil { return err } if cluster.Driver == "rancherKubernetesEngine" || cluster.Driver == "" { filter := defaultListOpts(ctx) filter.Filters["clusterId"] = cluster.ID nodePools, err := c.ManagementClient.NodePool.List(filter) if err != nil { return err } if len(nodePools.Data) > 0 { return errors.New("a node can't be manually registered to a cluster utilizing 
node-pools") } } else { return errors.New("a node can only be manually registered to a custom cluster") } clusterToken, err := getClusterRegToken(ctx, c, cluster.ID) if err != nil { return err } var roleFlags string if ctx.Bool("etcd") { roleFlags = roleFlags + " --etcd" } if ctx.Bool("management") || ctx.Bool("controlplane") { if ctx.Bool("management") && !ctx.Bool("quiet") { logrus.Info("The flag --management is deprecated and replaced by --controlplane") } roleFlags = roleFlags + " --controlplane" } if ctx.Bool("worker") { roleFlags = roleFlags + " --worker" } command := clusterToken.NodeCommand + roleFlags if labels := ctx.StringSlice("label"); labels != nil { for _, label := range labels { command = command + fmt.Sprintf(" --label %v", label) } } if ctx.Bool("quiet") { fmt.Println(command) return nil } fmt.Printf("Run this command on an existing machine already running a "+ "supported version of Docker:\n%v\n", command) return nil } func clusterDelete(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } for _, cluster := range ctx.Args() { resource, err := Lookup(c, cluster, "cluster") if err != nil { return err } cluster, err := getClusterByID(c, resource.ID) if err != nil { return err } err = c.ManagementClient.Cluster.Delete(cluster) if err != nil { return err } } return nil } func clusterExport(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } resource, err := Lookup(c, ctx.Args().First(), "cluster") if err != nil { return err } cluster, err := getClusterByID(c, resource.ID) if err != nil { return err } if _, ok := cluster.Actions["exportYaml"]; !ok { return errors.New("cluster does not support being exported") } export, err := c.ManagementClient.Cluster.ActionExportYaml(cluster) if err != nil { return err } fmt.Println(export.YAMLOutput) return nil } func clusterKubeConfig(ctx 
*cli.Context) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } resource, err := Lookup(c, ctx.Args().First(), "cluster") if err != nil { return err } cluster, err := getClusterByID(c, resource.ID) if err != nil { return err } config, err := c.ManagementClient.Cluster.ActionGenerateKubeconfig(cluster) if err != nil { return err } fmt.Println(config.Config) return nil } func addClusterMemberRoles(ctx *cli.Context) error { if len(ctx.Args()) < 2 { return cli.ShowSubcommandHelp(ctx) } memberName := ctx.Args().First() roles := ctx.Args()[1:] c, err := GetClient(ctx) if err != nil { return err } member, err := searchForMember(ctx, c, memberName) if err != nil { return err } clusterID := c.UserConfig.FocusedCluster() if ctx.String("cluster-id") != "" { clusterID = ctx.String("cluster-id") } for _, role := range roles { rtb := managementClient.ClusterRoleTemplateBinding{ ClusterID: clusterID, RoleTemplateID: role, } if member.PrincipalType == "user" { rtb.UserPrincipalID = member.ID } else { rtb.GroupPrincipalID = member.ID } _, err = c.ManagementClient.ClusterRoleTemplateBinding.Create(&rtb) if err != nil { return err } } return nil } func deleteClusterMemberRoles(ctx *cli.Context) error { if len(ctx.Args()) < 2 { return cli.ShowSubcommandHelp(ctx) } memberName := ctx.Args().First() roles := ctx.Args()[1:] c, err := GetClient(ctx) if err != nil { return err } member, err := searchForMember(ctx, c, memberName) if err != nil { return err } clusterID := c.UserConfig.FocusedCluster() if ctx.String("cluster-id") != "" { clusterID = ctx.String("cluster-id") } for _, role := range roles { filter := defaultListOpts(ctx) filter.Filters["clusterId"] = clusterID filter.Filters["roleTemplateId"] = role if member.PrincipalType == "user" { filter.Filters["userPrincipalId"] = member.ID } else { filter.Filters["groupPrincipalId"] = member.ID } bindings, err :=
c.ManagementClient.ClusterRoleTemplateBinding.List(filter) if err != nil { return err } for _, binding := range bindings.Data { err = c.ManagementClient.ClusterRoleTemplateBinding.Delete(&binding) if err != nil { return err } } } return nil } func listClusterRoles(ctx *cli.Context) error { return listRoles(ctx, "cluster") } func listClusterMembers(ctx *cli.Context) error { c, err := GetClient(ctx) if err != nil { return err } clusterID := c.UserConfig.FocusedCluster() if ctx.String("cluster-id") != "" { clusterID = ctx.String("cluster-id") } filter := defaultListOpts(ctx) filter.Filters["clusterId"] = clusterID bindings, err := c.ManagementClient.ClusterRoleTemplateBinding.List(filter) if err != nil { return err } userFilter := defaultListOpts(ctx) users, err := c.ManagementClient.User.List(userFilter) if err != nil { return err } userMap := usersToNameMapping(users.Data) var b []RoleTemplateBinding for _, binding := range bindings.Data { parsedTime, err := createdTimetoHuman(binding.Created) if err != nil { return err } b = append(b, RoleTemplateBinding{ ID: binding.ID, User: userMap[binding.UserID], Role: binding.RoleTemplateID, Created: parsedTime, }) } return listRoleTemplateBindings(ctx, b) } // getClusterRegToken will return an existing token or create one if none exist func getClusterRegToken( ctx *cli.Context, c *cliclient.MasterClient, clusterID string, ) (managementClient.ClusterRegistrationToken, error) { tokenOpts := defaultListOpts(ctx) tokenOpts.Filters["clusterId"] = clusterID clusterTokenCollection, err := c.ManagementClient.ClusterRegistrationToken.List(tokenOpts) if err != nil { return managementClient.ClusterRegistrationToken{}, err } if len(clusterTokenCollection.Data) == 0 { crt := &managementClient.ClusterRegistrationToken{ ClusterID: clusterID, } clusterToken, err := c.ManagementClient.ClusterRegistrationToken.Create(crt) if err != nil { return managementClient.ClusterRegistrationToken{}, err } return *clusterToken, nil } return 
clusterTokenCollection.Data[0], nil } func getClusterByID( c *cliclient.MasterClient, clusterID string, ) (*managementClient.Cluster, error) { cluster, err := c.ManagementClient.Cluster.ByID(clusterID) if err != nil { return nil, fmt.Errorf("no cluster found with the ID [%s], run "+ "`rancher clusters` to see available clusters: %s", clusterID, err) } return cluster, nil } func getClusterProvider(cluster managementClient.Cluster) string { switch cluster.Driver { case "imported": switch cluster.Provider { case "rke2": return "RKE2" case "k3s": return "K3S" default: return "Imported" } case "k3s": return "K3S" case "rke2": return "RKE2" case "rancherKubernetesEngine": return "Rancher Kubernetes Engine" case "azureKubernetesService", "AKS": return "Azure Kubernetes Service" case "googleKubernetesEngine", "GKE": return "Google Kubernetes Engine" case "EKS": return "Elastic Kubernetes Service" default: return "Unknown" } } func getClusterCPU(cluster managementClient.Cluster) string { req := parseResourceString(cluster.Requested["cpu"]) alloc := parseResourceString(cluster.Allocatable["cpu"]) return req + "/" + alloc } func getClusterRAM(cluster managementClient.Cluster) string { req := parseResourceString(cluster.Requested["memory"]) alloc := parseResourceString(cluster.Allocatable["memory"]) return req + "/" + alloc + " GB" } // parseResourceString returns GB for Ki and Mi and CPU cores from 'm' func parseResourceString(mem string) string { if strings.HasSuffix(mem, "Ki") { num, err := strconv.ParseFloat(strings.Replace(mem, "Ki", "", -1), 64) if err != nil { return mem } num = num / 1024 / 1024 return strings.TrimSuffix(fmt.Sprintf("%.2f", num), ".0") } if strings.HasSuffix(mem, "Mi") { num, err := strconv.ParseFloat(strings.Replace(mem, "Mi", "", -1), 64) if err != nil { return mem } num = num / 1024 return strings.TrimSuffix(fmt.Sprintf("%.2f", num), ".0") } if strings.HasSuffix(mem, "m") { num, err := strconv.ParseFloat(strings.Replace(mem, "m", "", -1), 64) if err 
!= nil { return mem } num = num / 1000 return strconv.FormatFloat(num, 'f', 2, 32) } return mem } func getClusterPods(cluster managementClient.Cluster) string { return cluster.Requested["pods"] + "/" + cluster.Allocatable["pods"] } func getClusterK8sOptions(c *cliclient.MasterClient) ([]string, error) { var options []string setting, err := c.ManagementClient.Setting.ByID("k8s-version-to-images") if err != nil { return nil, err } var objmap map[string]*json.RawMessage err = json.Unmarshal([]byte(setting.Value), &objmap) if err != nil { return nil, err } for key := range objmap { options = append(options, key) } return options, nil } func getClusterConfig(ctx *cli.Context) (*managementClient.Cluster, error) { config := managementClient.Cluster{} config.Name = ctx.Args().First() config.Description = ctx.String("description") if !ctx.Bool("import") { config.RancherKubernetesEngineConfig = new(managementClient.RancherKubernetesEngineConfig) ignoreDockerVersion := ctx.BoolT("disable-docker-version") config.RancherKubernetesEngineConfig.IgnoreDockerVersion = &ignoreDockerVersion if ctx.String("k8s-version") != "" { config.RancherKubernetesEngineConfig.Version = ctx.String("k8s-version") } if ctx.String("network-provider") != "" { config.RancherKubernetesEngineConfig.Network = &managementClient.NetworkConfig{ Plugin: ctx.String("network-provider"), } } if ctx.String("rke-config") != "" { bytes, err := readFileReturnJSON(ctx.String("rke-config")) if err != nil { return nil, err } var jsonObject map[string]interface{} if err = json.Unmarshal(bytes, &jsonObject); err != nil { return nil, err } // Most values in RancherKubernetesEngineConfig are defined with struct tags for both JSON and YAML in camelCase. // Changing the tags will be a breaking change. For proper deserialization, we must convert all keys to camelCase. // Note that we ignore kebab-case keys. 
Users themselves should ensure any relevant keys // (especially top-level keys in `services`, like `kube-api` or `kube-controller`) are camelCase or snake-case in cluster config. convertSnakeCaseKeysToCamelCase(jsonObject) marshalled, err := json.Marshal(jsonObject) if err != nil { return nil, err } if err = json.Unmarshal(marshalled, &config); err != nil { return nil, err } } } return &config, nil } 07070100000013000081A4000000000000000000000001673C86850000419C000000000000000000000000000000000000002100000000rancher-cli-2.10.0/cmd/common.gopackage cmd import ( "bufio" "bytes" "crypto/x509" "encoding/pem" "fmt" "io" "math/rand" "net/url" "os" "os/exec" "path/filepath" "regexp" "strconv" "strings" "syscall" "text/template" "time" "unicode" "github.com/ghodss/yaml" "github.com/pkg/errors" "github.com/rancher/cli/cliclient" "github.com/rancher/cli/config" "github.com/rancher/norman/clientbase" ntypes "github.com/rancher/norman/types" "github.com/rancher/norman/types/convert" managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3" "github.com/sirupsen/logrus" "github.com/urfave/cli" "golang.org/x/text/cases" "golang.org/x/text/language" "k8s.io/client-go/tools/clientcmd/api" ) const ( letters = "abcdefghijklmnopqrstuvwxyz0123456789" cfgFile = "cli2.json" kubeConfigKeyFormat = "%s-%s" ) var ( // ManagementResourceTypes lists the types we use the management client for ManagementResourceTypes = []string{"cluster", "node", "project"} // ProjectResourceTypes lists the types we use the cluster client for ProjectResourceTypes = []string{"secret", "namespacedSecret", "workload"} // ClusterResourceTypes lists the types we use the project client for ClusterResourceTypes = []string{"persistentVolume", "storageClass", "namespace"} formatFlag = cli.StringFlag{ Name: "format,o", Usage: "'json', 'yaml' or custom format", } quietFlag = cli.BoolFlag{ Name: "quiet,q", Usage: "Only display IDs or suppress help text", } ) type MemberData struct { Name string 
MemberType string AccessType string } type RoleTemplate struct { ID string Name string Description string } type RoleTemplateBinding struct { ID string User string Role string Created string } func listAllRoles() []string { roles := []string{} roles = append(roles, ManagementResourceTypes...) roles = append(roles, ProjectResourceTypes...) roles = append(roles, ClusterResourceTypes...) return roles } func listRoles(ctx *cli.Context, context string) error { c, err := GetClient(ctx) if err != nil { return err } filter := defaultListOpts(ctx) filter.Filters["hidden"] = false filter.Filters["context"] = context templates, err := c.ManagementClient.RoleTemplate.List(filter) if err != nil { return err } writer := NewTableWriter([][]string{ {"ID", "ID"}, {"NAME", "Name"}, {"DESCRIPTION", "Description"}, }, ctx) defer writer.Close() for _, item := range templates.Data { writer.Write(&RoleTemplate{ ID: item.ID, Name: item.Name, Description: item.Description, }) } return writer.Err() } func listRoleTemplateBindings(ctx *cli.Context, b []RoleTemplateBinding) error { writer := NewTableWriter([][]string{ {"BINDING-ID", "ID"}, {"USER", "User"}, {"ROLE", "Role"}, {"CREATED", "Created"}, }, ctx) defer writer.Close() for _, item := range b { writer.Write(&RoleTemplateBinding{ ID: item.ID, User: item.User, Role: item.Role, Created: item.Created, }) } return writer.Err() } func getKubeConfigForUser(ctx *cli.Context, user string) (*api.Config, error) { cf, err := loadConfig(ctx) if err != nil { return nil, err } focusedServer, err := cf.FocusedServer() if err != nil { return nil, err } kubeConfig := focusedServer.KubeConfigs[fmt.Sprintf(kubeConfigKeyFormat, user, focusedServer.FocusedCluster())] return kubeConfig, nil } func setKubeConfigForUser(ctx *cli.Context, user string, kubeConfig *api.Config) error { cf, err := loadConfig(ctx) if err != nil { return err } focusedServer, err := cf.FocusedServer() if err != nil { return err } if focusedServer.KubeConfigs == nil { 
focusedServer.KubeConfigs = make(map[string]*api.Config) } focusedServer.KubeConfigs[fmt.Sprintf(kubeConfigKeyFormat, user, focusedServer.FocusedCluster())] = kubeConfig return cf.Write() } func usersToNameMapping(u []managementClient.User) map[string]string { userMapping := make(map[string]string) for _, user := range u { if user.Name != "" { userMapping[user.ID] = user.Name } else { userMapping[user.ID] = user.Username } } return userMapping } func searchForMember(ctx *cli.Context, c *cliclient.MasterClient, name string) (*managementClient.Principal, error) { filter := defaultListOpts(ctx) filter.Filters["ID"] = "thisisnotathingIhope" // A collection is needed to get the action link pCollection, err := c.ManagementClient.Principal.List(filter) if err != nil { return nil, err } p := managementClient.SearchPrincipalsInput{ Name: name, } results, err := c.ManagementClient.Principal.CollectionActionSearch(pCollection, &p) if err != nil { return nil, err } dataLength := len(results.Data) switch { case dataLength == 0: return nil, fmt.Errorf("no results found for %q", name) case dataLength == 1: return &results.Data[0], nil case dataLength >= 10: results.Data = results.Data[:10] } var names []string for _, person := range results.Data { names = append(names, person.Name+fmt.Sprintf(" (%s)", person.PrincipalType)) } selection := selectFromList("Multiple results found:", names) return &results.Data[selection], nil } func getRancherServerVersion(c *cliclient.MasterClient) (string, error) { setting, err := c.ManagementClient.Setting.ByID("server-version") if err != nil { return "", err } return setting.Value, err } func loadAndVerifyCert(path string) (string, error) { caCert, err := os.ReadFile(path) if err != nil { return "", err } return verifyCert(caCert) } func verifyCert(caCert []byte) (string, error) { // replace the escaped version of the line break caCert = bytes.Replace(caCert, []byte(`\n`), []byte("\n"), -1) block, _ := pem.Decode(caCert) if nil == block { return 
"", errors.New("No cert was found") } parsedCert, err := x509.ParseCertificate(block.Bytes) if err != nil { return "", err } if !parsedCert.IsCA { return "", errors.New("CACerts is not valid") } return string(caCert), nil } func GetConfigPath(ctx *cli.Context) string { // path will always be set by the global flag default path := ctx.GlobalString("config") return filepath.Join(path, cfgFile) } func loadConfig(ctx *cli.Context) (config.Config, error) { path := GetConfigPath(ctx) return config.LoadFromPath(path) } func lookupConfig(ctx *cli.Context) (*config.ServerConfig, error) { cf, err := loadConfig(ctx) if err != nil { return nil, err } cs, err := cf.FocusedServer() if err != nil { return nil, err } return cs, nil } func GetClient(ctx *cli.Context) (*cliclient.MasterClient, error) { cf, err := lookupConfig(ctx) if err != nil { return nil, err } mc, err := cliclient.NewMasterClient(cf) if err != nil { return nil, err } return mc, nil } // GetResourceType maps an incoming resource type to a valid one from the schema func GetResourceType(c *cliclient.MasterClient, resource string) (string, error) { if c.ManagementClient != nil { for key := range c.ManagementClient.APIBaseClient.Types { if strings.EqualFold(key, resource) { return key, nil } } } if c.ProjectClient != nil { for key := range c.ProjectClient.APIBaseClient.Types { if strings.EqualFold(key, resource) { return key, nil } } } if c.ClusterClient != nil { for key := range c.ClusterClient.APIBaseClient.Types { if strings.EqualFold(key, resource) { return key, nil } } } if c.CAPIClient != nil { for key := range c.CAPIClient.APIBaseClient.Types { lowerKey := strings.ToLower(key) if strings.HasPrefix(lowerKey, "cluster.x-k8s.io") && lowerKey == strings.ToLower(resource) { return key, nil } } } return "", fmt.Errorf("unknown resource type: %s", resource) } func Lookup(c *cliclient.MasterClient, name string, types ...string) (*ntypes.Resource, error) { var byName *ntypes.Resource for _, schemaType := range types { 
rt, err := GetResourceType(c, schemaType) if err != nil { logrus.Debugf("Error GetResourceType: %v", err) return nil, err } var schemaClient clientbase.APIBaseClientInterface // the schemaType dictates which client we need to use if c.CAPIClient != nil { if strings.HasPrefix(rt, "cluster.x-k8s.io") { schemaClient = c.CAPIClient } } if c.ManagementClient != nil { if _, ok := c.ManagementClient.APIBaseClient.Types[rt]; ok { schemaClient = c.ManagementClient } } if c.ProjectClient != nil { if _, ok := c.ProjectClient.APIBaseClient.Types[rt]; ok { schemaClient = c.ProjectClient } } if c.ClusterClient != nil { if _, ok := c.ClusterClient.APIBaseClient.Types[rt]; ok { schemaClient = c.ClusterClient } } // Attempt to get the resource by ID var resource ntypes.Resource if err := schemaClient.ByID(schemaType, name, &resource); !clientbase.IsNotFound(err) && err != nil { logrus.Debugf("Error schemaClient.ByID: %v", err) return nil, err } else if err == nil && resource.ID == name { return &resource, nil } // Resource was not found assuming the ID, check if it's the name of a resource var collection ntypes.ResourceCollection listOpts := &ntypes.ListOpts{ Filters: map[string]interface{}{ "name": name, "removed_null": 1, }, } if err := schemaClient.List(schemaType, listOpts, &collection); !clientbase.IsNotFound(err) && err != nil { logrus.Debugf("Error schemaClient.List: %v", err) return nil, err } if len(collection.Data) > 1 { ids := []string{} for _, data := range collection.Data { ids = append(ids, data.ID) } return nil, fmt.Errorf("Multiple resources of type %s found for name %s: %v", schemaType, name, ids) } // No matches for this schemaType, try the next one if len(collection.Data) == 0 { continue } if byName != nil { return nil, fmt.Errorf("Multiple resources named %s: %s:%s, %s:%s", name, collection.Data[0].Type, collection.Data[0].ID, byName.Type, byName.ID) } byName = &collection.Data[0] } if byName == nil { return nil, fmt.Errorf("Not found: %s", name) } return 
byName, nil } // RandomLetters returns a string with random letters of length n func RandomLetters(n int) string { b := make([]byte, n) for i := range b { b[i] = letters[rand.Intn(len(letters))] } return string(b) } func appendTabDelim(buf *bytes.Buffer, value string) { if buf.Len() == 0 { buf.WriteString(value) } else { buf.WriteString("\t") buf.WriteString(value) } } func SimpleFormat(values [][]string) (string, string) { headerBuffer := bytes.Buffer{} valueBuffer := bytes.Buffer{} for _, v := range values { appendTabDelim(&headerBuffer, v[0]) if strings.Contains(v[1], "{{") { appendTabDelim(&valueBuffer, v[1]) } else { appendTabDelim(&valueBuffer, "{{."+v[1]+"}}") } } headerBuffer.WriteString("\n") valueBuffer.WriteString("\n") return headerBuffer.String(), valueBuffer.String() } func defaultAction(fn func(ctx *cli.Context) error) func(ctx *cli.Context) error { return func(ctx *cli.Context) error { if ctx.Bool("help") { return cli.ShowAppHelp(ctx) } return fn(ctx) } } func printTemplate(out io.Writer, templateContent string, obj interface{}) error { funcMap := map[string]interface{}{ "endpoint": FormatEndpoint, "ips": FormatIPAddresses, "json": FormatJSON, } tmpl, err := template.New("").Funcs(funcMap).Parse(templateContent) if err != nil { return err } return tmpl.Execute(out, obj) } func selectFromList(header string, choices []string) int { if header != "" { fmt.Println(header) } reader := bufio.NewReader(os.Stdin) selected := -1 for selected <= 0 || selected > len(choices) { for i, choice := range choices { fmt.Printf("[%d] %s\n", i+1, choice) } fmt.Print("Select: ") text, _ := reader.ReadString('\n') text = strings.TrimSpace(text) num, err := strconv.Atoi(text) if err == nil { selected = num } } return selected - 1 } func processExitCode(err error) error { if exitErr, ok := err.(*exec.ExitError); ok { if status, ok := exitErr.Sys().(syscall.WaitStatus); ok { os.Exit(status.ExitStatus()) } } return err } func SplitOnColon(s string) []string { return 
strings.Split(s, ":") } func parseClusterAndProjectID(id string) (string, string, error) { // Validate id // Examples: // c-qmpbm:p-mm62v // c-qmpbm:project-mm62v // c-m-j2s7m6lq:p-mm62v // See https://github.com/rancher/rancher/issues/14400 if match, _ := regexp.MatchString("((local)|(c-[[:alnum:]]{5})|(c-m-[[:alnum:]]{8})):(p|project)-[[:alnum:]]{5}", id); match { parts := SplitOnColon(id) return parts[0], parts[1], nil } return "", "", fmt.Errorf("Unable to extract clusterid and projectid from [%s]", id) } // Return a JSON blob of the file at path func readFileReturnJSON(path string) ([]byte, error) { file, err := os.ReadFile(path) if err != nil { return []byte{}, err } // This is probably already JSON if true if hasPrefix(file, []byte("{")) { return file, nil } return yaml.YAMLToJSON(file) } // renameKeys renames the keys in a given map of arbitrary depth with a provided function for string keys. func renameKeys(input map[string]interface{}, f func(string) string) { for k, v := range input { delete(input, k) newKey := f(k) input[newKey] = v if innerMap, ok := v.(map[string]interface{}); ok { renameKeys(innerMap, f) } } } // convertSnakeCaseKeysToCamelCase takes a map and recursively transforms all snake_case keys into camelCase keys. func convertSnakeCaseKeysToCamelCase(input map[string]interface{}) { renameKeys(input, convert.ToJSONKey) } // Return true if the first non-whitespace bytes in buf is prefix. 
func hasPrefix(buf []byte, prefix []byte) bool { trim := bytes.TrimLeftFunc(buf, unicode.IsSpace) return bytes.HasPrefix(trim, prefix) } // getClusterNames maps cluster ID to name and defaults to ID if name is blank func getClusterNames(ctx *cli.Context, c *cliclient.MasterClient) (map[string]string, error) { clusterNames := make(map[string]string) clusterCollection, err := c.ManagementClient.Cluster.List(defaultListOpts(ctx)) if err != nil { return clusterNames, err } for _, cluster := range clusterCollection.Data { if cluster.Name == "" { clusterNames[cluster.ID] = cluster.ID } else { clusterNames[cluster.ID] = cluster.Name } } return clusterNames, nil } func getClusterName(cluster *managementClient.Cluster) string { if cluster.Name != "" { return cluster.Name } return cluster.ID } func createdTimetoHuman(t string) (string, error) { parsedTime, err := time.Parse(time.RFC3339, t) if err != nil { return "", err } return parsedTime.Format("02 Jan 2006 15:04:05 MST"), nil } func outputMembers(ctx *cli.Context, c *cliclient.MasterClient, members []managementClient.Member) error { writer := NewTableWriter([][]string{ {"NAME", "Name"}, {"MEMBER_TYPE", "MemberType"}, {"ACCESS_TYPE", "AccessType"}, }, ctx) defer writer.Close() for _, m := range members { principalID := m.UserPrincipalID if m.UserPrincipalID == "" { principalID = m.GroupPrincipalID } principal, err := c.ManagementClient.Principal.ByID(url.PathEscape(principalID)) if err != nil { return err } memberType := fmt.Sprintf("%s %s", principal.Provider, principal.PrincipalType) writer.Write(&MemberData{ Name: principal.Name, MemberType: cases.Title(language.Und).String(memberType), AccessType: m.AccessType, }) } return writer.Err() } func addMembersByNames(ctx *cli.Context, c *cliclient.MasterClient, members []managementClient.Member, toAddMembers []string, accessType string) ([]managementClient.Member, error) { for _, name := range toAddMembers { member, err := searchForMember(ctx, c, name) if err != nil { return 
nil, err } toAddMember := managementClient.Member{ AccessType: accessType, } if member.PrincipalType == "user" { toAddMember.UserPrincipalID = member.ID } else { toAddMember.GroupPrincipalID = member.ID } members = append(members, toAddMember) } return members, nil } func deleteMembersByNames(ctx *cli.Context, c *cliclient.MasterClient, members []managementClient.Member, todeleteMembers []string) ([]managementClient.Member, error) { for _, name := range todeleteMembers { member, err := searchForMember(ctx, c, name) if err != nil { return nil, err } var toKeepMembers []managementClient.Member for _, m := range members { if m.GroupPrincipalID != member.ID && m.UserPrincipalID != member.ID { toKeepMembers = append(toKeepMembers, m) } } members = toKeepMembers } return members, nil } func ConfigDir() (string, error) { homeDir, err := os.UserHomeDir() if err != nil { return "", err } return filepath.Join(homeDir, ".rancher"), nil } 07070100000014000081A4000000000000000000000001673C868500000819000000000000000000000000000000000000002600000000rancher-cli-2.10.0/cmd/common_test.gopackage cmd import ( "testing" "gopkg.in/check.v1" ) // Hook up gocheck into the "go test" runner. 
func Test(t *testing.T) { check.TestingT(t) } type CommonTestSuite struct { } var _ = check.Suite(&CommonTestSuite{}) func (s *CommonTestSuite) SetUpSuite(c *check.C) { } func (s *CommonTestSuite) TestParseClusterAndProjectID(c *check.C) { testParse(c, "local:p-12345", "local", "p-12345", false) testParse(c, "c-12345:p-12345", "c-12345", "p-12345", false) testParse(c, "cocal:p-12345", "", "", true) testParse(c, "c-123:p-123", "", "", true) testParse(c, "", "", "", true) testParse(c, "c-m-12345678:p-12345", "c-m-12345678", "p-12345", false) testParse(c, "c-m-123:p-12345", "", "", true) } func (s *CommonTestSuite) TestConvertSnakeCaseKeysToCamelCase(c *check.C) { cases := []struct { input map[string]interface{} renamed map[string]interface{} }{ { map[string]interface{}{"foo_bar": "hello"}, map[string]interface{}{"fooBar": "hello"}, }, { map[string]interface{}{"fooBar": "hello"}, map[string]interface{}{"fooBar": "hello"}, }, { map[string]interface{}{"foobar": "hello", "some_key": "valueUnmodified", "bar-baz": "bar-baz"}, map[string]interface{}{"foobar": "hello", "someKey": "valueUnmodified", "bar-baz": "bar-baz"}, }, { map[string]interface{}{"foo_bar": "hello", "backup_config": map[string]interface{}{"hello_world": true}, "config_id": 123}, map[string]interface{}{"fooBar": "hello", "backupConfig": map[string]interface{}{"helloWorld": true}, "configId": 123}, }, } for _, tc := range cases { convertSnakeCaseKeysToCamelCase(tc.input) c.Assert(tc.input, check.DeepEquals, tc.renamed) } } func testParse(c *check.C, testID, expectedCluster, expectedProject string, errorExpected bool) { actualCluster, actualProject, actualErr := parseClusterAndProjectID(testID) c.Assert(actualCluster, check.Equals, expectedCluster) c.Assert(actualProject, check.Equals, expectedProject) if errorExpected { c.Assert(actualErr, check.NotNil) } else { c.Assert(actualErr, check.IsNil) } } 
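The table-driven test above pins down how snake_case map keys are renamed. A minimal, self-contained sketch of that behavior (using a hypothetical `convertKeys` helper, not the package's actual `convertSnakeCaseKeysToCamelCase`), assuming keys are split on `_`, kebab-case keys are left alone, and nested maps are converted recursively:

```go
package main

import (
	"fmt"
	"strings"
)

// convertKeys renames snake_case keys to camelCase in place.
// Keys without an underscore (camelCase, kebab-case) are untouched;
// nested map[string]interface{} values are converted recursively.
func convertKeys(m map[string]interface{}) {
	for key, value := range m {
		if nested, ok := value.(map[string]interface{}); ok {
			convertKeys(nested)
		}
		if !strings.Contains(key, "_") {
			continue
		}
		parts := strings.Split(key, "_")
		for i := 1; i < len(parts); i++ {
			if parts[i] != "" {
				parts[i] = strings.ToUpper(parts[i][:1]) + parts[i][1:]
			}
		}
		delete(m, key)
		m[strings.Join(parts, "")] = value
	}
}

func main() {
	m := map[string]interface{}{
		"foo_bar":       "hello",
		"bar-baz":       "kept",
		"backup_config": map[string]interface{}{"hello_world": true},
	}
	convertKeys(m)
	fmt.Println(m["fooBar"], m["bar-baz"]) // prints: hello kept
}
```

Deleting and re-adding keys while ranging is safe here: deleted keys are skipped, and any newly added camelCase key contains no underscore, so revisiting it is a no-op.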
07070100000015000081A4000000000000000000000001673C86850000064C000000000000000000000000000000000000002200000000rancher-cli-2.10.0/cmd/context.gopackage cmd import ( "github.com/rancher/cli/cliclient" "github.com/sirupsen/logrus" "github.com/urfave/cli" ) func ContextCommand() cli.Command { return cli.Command{ Name: "context", Usage: "Operations for the context", Description: `Switch or view context. A context is the server->cluster->project currently in focus. `, Subcommands: []cli.Command{ { Name: "switch", Usage: "Switch to a new context", Description: ` The project arg is optional; if it is not passed, a list of available projects will be displayed and one can be selected. If only one project is available it will be automatically selected. `, ArgsUsage: "[PROJECT_ID/PROJECT_NAME]", Action: contextSwitch, }, { Name: "current", Usage: "Display the current context", Action: loginContext, }, }, } } func contextSwitch(ctx *cli.Context) error { cf, err := loadConfig(ctx) if err != nil { return err } server, err := cf.FocusedServer() if err != nil { return err } c, err := cliclient.NewManagementClient(server) if err != nil { return err } var projectID string if ctx.NArg() == 0 { projectID, err = getProjectContext(ctx, c) if err != nil { return err } } else { resource, err := Lookup(c, ctx.Args().First(), "project") if err != nil { return err } projectID = resource.ID } project, err := c.ManagementClient.Project.ByID(projectID) if err != nil { return err } logrus.Infof("Setting new context to project %s", project.Name) server.Project = project.ID err = cf.Write() if err != nil { return err } return nil }
value.(map[string]interface{}) if !ok { return "" } s := fmt.Sprintf("%v:%v", dataMap["ipAddress"], dataMap["port"]) if buf.Len() == 0 { buf.WriteString(s) } else { buf.WriteString(", ") buf.WriteString(s) } } return buf.String() } func FormatIPAddresses(data interface{}) string { //todo: revisit return "" //ips, ok := data.([]client.IpAddress) //if !ok { // return "" //} // //ipStrings := []string{} //for _, ip := range ips { // if ip.Address != "" { // ipStrings = append(ipStrings, ip.Address) // } //} // //return strings.Join(ipStrings, ", ") } func FormatJSON(data interface{}) (string, error) { bytes, err := json.MarshalIndent(data, "", " ") return string(bytes) + "\n", err } 07070100000017000081A4000000000000000000000001673C868500000745000000000000000000000000000000000000002200000000rancher-cli-2.10.0/cmd/inspect.gopackage cmd import ( "strings" "github.com/urfave/cli" ) func InspectCommand() cli.Command { return cli.Command{ Name: "inspect", Usage: "View details of resources", Description: ` Inspect resources by name or ID in the current context. 
If the 'type' is not specified inspect will search: ` + strings.Join(listAllRoles(), ", ") + ` Examples: # Specify the type $ rancher inspect --type cluster clusterFoo # No type is specified so defaults are checked $ rancher inspect myvolume # Inspect a project and get the output in yaml format with the projects links $ rancher inspect --type project --format yaml --links projectFoo `, ArgsUsage: "[RESOURCEID RESOURCENAME]", Action: inspectResources, Flags: []cli.Flag{ cli.BoolFlag{ Name: "links", Usage: "Include URLs to actions and links in resource output", }, cli.StringFlag{ Name: "type", Usage: "Specify the type of resource to inspect", }, cli.StringFlag{ Name: "format", Usage: "'json', 'yaml' or Custom format: '{{.kind}}'", Value: "json", }, }, } } func inspectResources(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowCommandHelp(ctx, "inspect") } c, err := GetClient(ctx) if err != nil { return err } t := ctx.String("type") types := []string{} if t != "" { rt, err := GetResourceType(c, t) if err != nil { return err } types = append(types, rt) } else { types = listAllRoles() } resource, err := Lookup(c, ctx.Args().First(), types...) 
if err != nil { return err } mapResource := map[string]interface{}{} err = c.ByID(resource, &mapResource) if err != nil { return err } if !ctx.Bool("links") { delete(mapResource, "links") delete(mapResource, "actions") } writer := NewTableWriter(nil, ctx) writer.Write(mapResource) writer.Close() return writer.Err() } 07070100000018000081A4000000000000000000000001673C868500000D31000000000000000000000000000000000000002200000000rancher-cli-2.10.0/cmd/kubectl.gopackage cmd import ( "fmt" "os" "os/exec" "strings" "github.com/rancher/norman/clientbase" client "github.com/rancher/rancher/pkg/client/generated/management/v3" "github.com/urfave/cli" "k8s.io/client-go/tools/clientcmd" "k8s.io/client-go/tools/clientcmd/api" ) func KubectlCommand() cli.Command { return cli.Command{ Name: "kubectl", Usage: "Run kubectl commands", Description: "Use the current cluster context to run kubectl commands in the cluster", Action: runKubectl, SkipFlagParsing: true, } } func runKubectl(ctx *cli.Context) error { args := ctx.Args() if len(args) > 0 && (args[0] == "-h" || args[0] == "--help") { return cli.ShowCommandHelp(ctx, "kubectl") } path, err := exec.LookPath("kubectl") if err != nil { return fmt.Errorf("kubectl is required to be set in your path to use this "+ "command. See https://kubernetes.io/docs/tasks/tools/install-kubectl/ "+ "for more info. 
Error: %s", err.Error()) } c, err := GetClient(ctx) if err != nil { return err } config, err := loadConfig(ctx) if err != nil { return err } currentRancherServer, err := config.FocusedServer() if err != nil { return err } currentToken := currentRancherServer.AccessKey t, err := c.ManagementClient.Token.ByID(currentToken) if err != nil { return err } currentUser := t.UserID kubeConfig, err := getKubeConfigForUser(ctx, currentUser) if err != nil { return err } var isTokenValid bool if kubeConfig != nil { tokenID, err := extractKubeconfigTokenID(*kubeConfig) if err != nil { return err } isTokenValid, err = validateToken(tokenID, c.ManagementClient.Token) if err != nil { return err } } if kubeConfig == nil || !isTokenValid { cluster, err := getClusterByID(c, c.UserConfig.FocusedCluster()) if err != nil { return err } config, err := c.ManagementClient.Cluster.ActionGenerateKubeconfig(cluster) if err != nil { return err } kubeConfigBytes := []byte(config.Config) kubeConfig, err = clientcmd.Load(kubeConfigBytes) if err != nil { return err } if err := setKubeConfigForUser(ctx, currentUser, kubeConfig); err != nil { return err } } tmpfile, err := os.CreateTemp("", "rancher-") if err != nil { return err } defer os.Remove(tmpfile.Name()) if err := clientcmd.WriteToFile(*kubeConfig, tmpfile.Name()); err != nil { return err } if err := tmpfile.Close(); err != nil { return err } cmd := exec.Command(path, ctx.Args()...) 
cmd.Env = append(os.Environ(), "KUBECONFIG="+tmpfile.Name()) cmd.Stdout = os.Stdout cmd.Stderr = os.Stderr cmd.Stdin = os.Stdin err = cmd.Run() if err != nil { return err } return nil } func extractKubeconfigTokenID(kubeconfig api.Config) (string, error) { if len(kubeconfig.AuthInfos) != 1 { return "", fmt.Errorf("invalid kubeconfig, expected to contain exactly 1 user") } var parts []string for _, val := range kubeconfig.AuthInfos { parts = strings.Split(val.Token, ":") if len(parts) != 2 { return "", fmt.Errorf("failed to parse kubeconfig token") } } return parts[0], nil } func validateToken(tokenID string, tokenClient client.TokenOperations) (bool, error) { token, err := tokenClient.ByID(tokenID) if err != nil { if !clientbase.IsNotFound(err) { return false, err } return false, nil } return !token.Expired, nil } 07070100000019000081A4000000000000000000000001673C8685000041E7000000000000000000000000000000000000002800000000rancher-cli-2.10.0/cmd/kubectl_token.gopackage cmd import ( "bytes" "crypto" "crypto/rand" "crypto/rsa" "crypto/tls" "crypto/x509" "encoding/base64" "encoding/json" "errors" "fmt" "io" "math/big" "net/http" url2 "net/url" "os" "os/signal" "runtime" "strconv" "strings" "time" "github.com/rancher/cli/config" apiv3 "github.com/rancher/rancher/pkg/apis/management.cattle.io/v3" managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3" "github.com/tidwall/gjson" "github.com/urfave/cli" "golang.org/x/term" ) const deleteExample = ` Example: # Delete a cached credential $ rancher token delete cluster1_c-1234 # Delete multiple cached credentials $ rancher token delete cluster1_c-1234 cluster2_c-2345 # Delete all credentials $ rancher token delete all ` type LoginInput struct { server string userID string clusterID string authProvider string caCerts string skipVerify bool } const ( authProviderURL = "%s/v3-public/authProviders" authTokenURL = "%s/v3-public/authTokens/%s" ) var samlProviders = map[string]bool{ "pingProvider": true, 
"adfsProvider": true, "keyCloakProvider": true, "oktaProvider": true, "shibbolethProvider": true, } var oauthProviders = map[string]bool{ "azureADProvider": true, } var supportedAuthProviders = map[string]bool{ "localProvider": true, "freeIpaProvider": true, "openLdapProvider": true, "activeDirectoryProvider": true, // all saml providers "pingProvider": true, "adfsProvider": true, "keyCloakProvider": true, "oktaProvider": true, "shibbolethProvider": true, // oauth providers "azureADProvider": true, } func CredentialCommand() cli.Command { configDir, err := ConfigDir() if err != nil { if runtime.GOOS == "windows" { configDir = "%HOME%\\.rancher" } else { configDir = "${HOME}/.rancher" } } return cli.Command{ Name: "token", Usage: "Authenticate and generate new kubeconfig token", Action: runCredential, Flags: []cli.Flag{ cli.StringFlag{ Name: "server", Usage: "Name of rancher server", }, cli.StringFlag{ Name: "user", Usage: "user-id", }, cli.StringFlag{ Name: "cluster", Usage: "cluster-id", }, cli.StringFlag{ Name: "auth-provider", Usage: "Name of Auth Provider to use for authentication", }, cli.StringFlag{ Name: "cacerts", Usage: "Location of CaCerts to use", }, cli.BoolFlag{ Name: "skip-verify", Usage: "Skip verification of the CACerts presented by the Server", }, }, Subcommands: []cli.Command{ { Name: "delete", Usage: fmt.Sprintf("Delete cached token used for kubectl login at [%s] \n %s", configDir, deleteExample), Action: deleteCachedCredential, }, }, } } func runCredential(ctx *cli.Context) error { if ctx.Bool("delete") { return deleteCachedCredential(ctx) } server := ctx.String("server") if server == "" { return errors.New("name of rancher server is required") } url, err := url2.Parse(server) if err != nil { return err } if url.Scheme == "" { server = fmt.Sprintf("https://%s", server) } userID := ctx.String("user") if userID == "" { return errors.New("user-id is required") } clusterID := ctx.String("cluster") cachedCredName := fmt.Sprintf("%s_%s", userID, 
clusterID) cachedCred, err := loadCachedCredential(ctx, cachedCredName) if err != nil { customPrint(fmt.Errorf("LoadToken: %v", err)) } if cachedCred != nil { return json.NewEncoder(os.Stdout).Encode(cachedCred) } input := &LoginInput{ server: server, userID: userID, clusterID: clusterID, authProvider: ctx.String("auth-provider"), caCerts: ctx.String("cacerts"), skipVerify: ctx.Bool("skip-verify"), } newCred, err := loginAndGenerateCred(input) if err != nil { return err } if err := cacheCredential(ctx, newCred, fmt.Sprintf("%s_%s", userID, clusterID)); err != nil { customPrint(fmt.Errorf("CacheToken: %v", err)) } return json.NewEncoder(os.Stdout).Encode(newCred) } func deleteCachedCredential(ctx *cli.Context) error { if len(ctx.Args()) == 0 { return cli.ShowSubcommandHelp(ctx) } cf, err := loadConfig(ctx) if err != nil { return err } // dir is always set by global default. dir := ctx.GlobalString("config") if len(cf.Servers) == 0 { customPrint(fmt.Sprintf("there are no cached tokens in [%s]", dir)) return nil } if ctx.Args().First() == "all" { customPrint(fmt.Sprintf("removing cached tokens in [%s]", dir)) for _, server := range cf.Servers { server.KubeCredentials = make(map[string]*config.ExecCredential) } return cf.Write() } for _, key := range ctx.Args() { customPrint(fmt.Sprintf("removing [%s]", key)) for _, server := range cf.Servers { server.KubeCredentials[key] = nil } } return cf.Write() } func loadCachedCredential(ctx *cli.Context, key string) (*config.ExecCredential, error) { sc, err := lookupServerConfig(ctx) if err != nil { return nil, err } cred := sc.KubeToken(key) if cred == nil { return cred, nil } ts := cred.Status.ExpirationTimestamp if ts != nil && ts.Time.Before(time.Now()) { cf, err := loadConfig(ctx) if err != nil { return nil, err } cf.Servers[ctx.String("server")].KubeCredentials[key] = nil if err := cf.Write(); err != nil { return nil, err } return nil, nil } return cred, nil } // there is overlap between this and the lookupConfig() 
function. However, lookupConfig() requires // a server to be previously set in the Config, which might not be the case if rancher token // is run before rancher login. Perhaps we can deprecate rancher token down the line and defer // all it does to login. func lookupServerConfig(ctx *cli.Context) (*config.ServerConfig, error) { server := ctx.String("server") if server == "" { return nil, errors.New("name of rancher server is required") } cf, err := loadConfig(ctx) if err != nil { return nil, err } sc := cf.Servers[server] if sc == nil { sc = &config.ServerConfig{ KubeCredentials: make(map[string]*config.ExecCredential), } cf.Servers[server] = sc if err := cf.Write(); err != nil { return nil, err } } return sc, nil } func cacheCredential(ctx *cli.Context, cred *config.ExecCredential, id string) error { // cache only if valid if cred.Status.Token == "" { return nil } server := ctx.String("server") if server == "" { return errors.New("name of rancher server is required") } cf, err := loadConfig(ctx) if err != nil { return err } sc, err := lookupServerConfig(ctx) if err != nil { return err } // allocate the credentials map only if it has never been initialized, // so existing cached credentials for other clusters are preserved if sc.KubeCredentials == nil { sc.KubeCredentials = make(map[string]*config.ExecCredential) } sc.KubeCredentials[id] = cred cf.Servers[server] = sc return cf.Write() } func loginAndGenerateCred(input *LoginInput) (*config.ExecCredential, error) { // setup a client with the provided TLS configuration client, err := getClient(input.skipVerify, input.caCerts) if err != nil { return nil, err } authProviders, err := getAuthProviders(input.server) if err != nil { return nil, err } selectedProvider, err := selectAuthProvider(authProviders, input.authProvider) if err != nil { return nil, err } input.authProvider = selectedProvider.GetType() token := managementClient.Token{} if samlProviders[input.authProvider] { token, err = samlAuth(input, client) if err != nil { return nil, err } } else if oauthProviders[input.authProvider] { tokenPtr, err := oauthAuth(input, selectedProvider) if err 
!= nil { return nil, err } token = *tokenPtr } else { customPrint(fmt.Sprintf("Enter credentials for %s \n", input.authProvider)) token, err = basicAuth(input) if err != nil { return nil, err } } cred := &config.ExecCredential{ TypeMeta: config.TypeMeta{ Kind: "ExecCredential", APIVersion: "client.authentication.k8s.io/v1beta1", }, Status: &config.ExecCredentialStatus{}, } cred.Status.Token = token.Token if token.ExpiresAt == "" { return cred, nil } ts, err := time.Parse(time.RFC3339, token.ExpiresAt) if err != nil { customPrint(fmt.Sprintf("\n error parsing time %s %v", token.ExpiresAt, err)) return nil, err } cred.Status.ExpirationTimestamp = &config.Time{Time: ts} return cred, nil } func basicAuth(input *LoginInput) (managementClient.Token, error) { token := managementClient.Token{} username, err := customPrompt("Enter username: ", true) if err != nil { return token, err } password, err := customPrompt("Enter password: ", false) if err != nil { return token, err } responseType := "kubeconfig" if input.clusterID != "" { responseType = fmt.Sprintf("%s_%s", responseType, input.clusterID) } body := fmt.Sprintf(`{"responseType":%q, "username":%q, "password":%q}`, responseType, username, password) url := fmt.Sprintf("%s/v3-public/%ss/%s?action=login", input.server, input.authProvider, strings.ToLower(strings.Replace(input.authProvider, "Provider", "", 1))) response, err := request(http.MethodPost, url, bytes.NewBufferString(body)) if err != nil { return token, err } apiError := map[string]interface{}{} err = json.Unmarshal(response, &apiError) if err != nil { return token, err } if responseType := apiError["type"]; responseType == "error" { return token, fmt.Errorf("error logging in: code: "+ "[%v] message:[%v]", apiError["code"], apiError["message"]) } err = json.Unmarshal(response, &token) if err != nil { return token, err } return token, nil } func samlAuth(input *LoginInput, client *http.Client) (managementClient.Token, error) { token := managementClient.Token{} 
privateKey, err := rsa.GenerateKey(rand.Reader, 2048) if err != nil { return token, err } publicKey := privateKey.PublicKey marshalKey, err := json.Marshal(publicKey) if err != nil { return token, err } encodedKey := base64.StdEncoding.EncodeToString(marshalKey) id, err := generateKey() if err != nil { return token, err } responseType := "kubeconfig" if input.clusterID != "" { responseType = fmt.Sprintf("%s_%s", responseType, input.clusterID) } tokenURL := fmt.Sprintf(authTokenURL, input.server, id) req, err := http.NewRequest(http.MethodGet, tokenURL, bytes.NewBuffer(nil)) if err != nil { return token, err } req.Header.Set("content-type", "application/json") req.Header.Set("accept", "application/json") client.Timeout = 300 * time.Second loginRequest := fmt.Sprintf("%s/dashboard/auth/login?requestId=%s&publicKey=%s&responseType=%s", input.server, id, encodedKey, responseType) customPrint(fmt.Sprintf("\nLogin to Rancher Server at %s \n", loginRequest)) interrupt := make(chan os.Signal, 1) signal.Notify(interrupt, os.Interrupt) // timeout for user to login and get token timeout := time.NewTicker(15 * time.Minute) defer timeout.Stop() poll := time.NewTicker(10 * time.Second) defer poll.Stop() for { select { case <-poll.C: res, err := client.Do(req) if err != nil { return token, err } content, err := io.ReadAll(res.Body) if err != nil { res.Body.Close() return token, err } res.Body.Close() err = json.Unmarshal(content, &token) if err != nil { return token, err } if token.Token == "" { continue } decoded, err := base64.StdEncoding.DecodeString(token.Token) if err != nil { return token, err } decryptedBytes, err := privateKey.Decrypt(nil, decoded, &rsa.OAEPOptions{Hash: crypto.SHA256}) if err != nil { return token, err } token.Token = string(decryptedBytes) // delete token req, err = http.NewRequest(http.MethodDelete, tokenURL, bytes.NewBuffer(nil)) if err != nil { return token, err } req.Header.Set("content-type", "application/json") req.Header.Set("accept", 
"application/json") client.Timeout = 150 * time.Second res, err = client.Do(req) if err != nil { // log error and use the token if login succeeds customPrint(fmt.Errorf("DeleteToken: %v", err)) } defer res.Body.Close() return token, nil case <-timeout.C: break case <-interrupt: customPrint("received interrupt") break } return token, nil } } type TypedProvider interface { GetType() string } func getAuthProviders(server string) ([]TypedProvider, error) { authProvidersURL := fmt.Sprintf(authProviderURL, server) customPrint(authProvidersURL) response, err := request(http.MethodGet, authProvidersURL, nil) if err != nil { return nil, err } if !gjson.ValidBytes(response) { return nil, fmt.Errorf("invalid JSON response from %s", authProvidersURL) } data := gjson.GetBytes(response, "data").Array() var supportedProviders []TypedProvider for _, provider := range data { providerType := provider.Get("type").String() if providerType != "" && supportedAuthProviders[providerType] { var typedProvider TypedProvider switch providerType { case "azureADProvider": typedProvider = &apiv3.AzureADProvider{} case "localProvider": typedProvider = &apiv3.LocalProvider{} default: typedProvider = &apiv3.AuthProvider{} } err = json.Unmarshal([]byte(provider.Raw), typedProvider) if err != nil { return nil, fmt.Errorf("attempting to decode the auth provider of type %s: %w", providerType, err) } if typedProvider.GetType() == "localProvider" { supportedProviders = append([]TypedProvider{typedProvider}, supportedProviders...) 
} else { supportedProviders = append(supportedProviders, typedProvider) } } } return supportedProviders, nil } func selectAuthProvider(authProviders []TypedProvider, providerType string) (TypedProvider, error) { if len(authProviders) == 0 { return nil, errors.New("no auth provider configured") } // if providerType was specified, look for it if providerType != "" { for _, p := range authProviders { if p.GetType() == providerType { return p, nil } } return nil, fmt.Errorf("provider %s not found", providerType) } // otherwise ask the user (if more than one) if len(authProviders) == 1 { return authProviders[0], nil } var providers []string for i, val := range authProviders { providers = append(providers, fmt.Sprintf("%d - %s", i, val.GetType())) } try := 0 for try < 3 { customPrint(fmt.Sprintf("Auth providers:\n%v", strings.Join(providers, "\n"))) providerIndexStr, err := customPrompt("Select auth provider: ", true) if err != nil { try++ continue } providerIndex, err := strconv.Atoi(providerIndexStr) if err != nil || (providerIndex < 0 || providerIndex > len(providers)-1) { customPrint("Pick a valid auth provider") try++ continue } return authProviders[providerIndex], nil } return nil, errors.New("invalid auth provider") } func generateKey() (string, error) { characters := "abcdfghjklmnpqrstvwxz12456789" tokenLength := 32 token := make([]byte, tokenLength) for i := range token { r, err := rand.Int(rand.Reader, big.NewInt(int64(len(characters)))) if err != nil { return "", err } token[i] = characters[r.Int64()] } return string(token), nil } // getClient returns a client with the provided TLS configuration func getClient(skipVerify bool, caCerts string) (*http.Client, error) { tlsConfig, err := getTLSConfig(skipVerify, caCerts) if err != nil { return nil, err } // clone the DefaultTransport to get the default values transport := http.DefaultTransport.(*http.Transport).Clone() transport.TLSClientConfig = tlsConfig return &http.Client{Transport: transport}, nil } func 
getTLSConfig(skipVerify bool, caCerts string) (*tls.Config, error) { config := &tls.Config{ InsecureSkipVerify: skipVerify, } if caCerts == "" { return config, nil } // load custom certs cert, err := loadAndVerifyCert(caCerts) if err != nil { return nil, err } roots := x509.NewCertPool() ok := roots.AppendCertsFromPEM([]byte(cert)) if !ok { return nil, fmt.Errorf("failed to append CA certificates from %s", caCerts) } config.RootCAs = roots return config, nil } func request(method, url string, body io.Reader) ([]byte, error) { var response []byte req, err := http.NewRequest(method, url, body) if err != nil { return nil, err } client, err := getClient(true, "") if err != nil { return nil, err } res, err := client.Do(req) if err != nil { return nil, err } defer res.Body.Close() response, err = io.ReadAll(res.Body) if err != nil { return nil, err } return response, nil } func customPrompt(msg string, show bool) (result string, err error) { fmt.Fprint(os.Stderr, msg) if show { _, err = fmt.Fscan(os.Stdin, &result) } else { var data []byte data, err = term.ReadPassword(int(os.Stdin.Fd())) result = string(data) fmt.Fprintf(os.Stderr, "\n") } return result, err } func customPrint(data interface{}) { fmt.Fprintf(os.Stderr, "%v \n", data) } 0707010000001A000081A4000000000000000000000001673C8685000009BC000000000000000000000000000000000000002E00000000rancher-cli-2.10.0/cmd/kubectl_token_oauth.gopackage cmd import ( "bytes" "context" "encoding/json" "fmt" "net/http" "strings" "github.com/pkg/errors" apiv3 "github.com/rancher/rancher/pkg/apis/management.cattle.io/v3" managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3" "golang.org/x/oauth2" ) func oauthAuth(input *LoginInput, provider TypedProvider) (*managementClient.Token, error) { oauthConfig, err := newOauthConfig(provider) if err != nil { return nil, err } ctx := context.Background() deviceAuthResp, err := oauthConfig.DeviceAuth(ctx) if err != nil { return nil, err } customPrint(fmt.Sprintf( "\nTo sign in, use a web browser to open the page %s and enter 
the code %s to authenticate.\n", deviceAuthResp.VerificationURI, deviceAuthResp.UserCode, )) oauthToken, err := oauthConfig.DeviceAccessToken(ctx, deviceAuthResp) if err != nil { return nil, err } token, err := rancherLogin(input, provider, oauthToken) if err != nil { return nil, fmt.Errorf("error during rancher login: %w", err) } return token, nil } func newOauthConfig(provider TypedProvider) (*oauth2.Config, error) { var oauthProvider apiv3.OAuthProvider switch p := provider.(type) { case *apiv3.AzureADProvider: oauthProvider = p.OAuthProvider default: return nil, errors.New("provider is not a supported OAuth provider") } return &oauth2.Config{ ClientID: oauthProvider.ClientID, Scopes: oauthProvider.Scopes, Endpoint: oauth2.Endpoint{ AuthURL: oauthProvider.AuthURL, DeviceAuthURL: oauthProvider.DeviceAuthURL, TokenURL: oauthProvider.TokenURL, }, }, nil } func rancherLogin(input *LoginInput, provider TypedProvider, oauthToken *oauth2.Token) (*managementClient.Token, error) { // login with id_token providerName := strings.ToLower(strings.TrimSuffix(input.authProvider, "Provider")) url := fmt.Sprintf("%s/v3-public/%ss/%s?action=login", input.server, provider.GetType(), providerName) responseType := "kubeconfig" if input.clusterID != "" { responseType = fmt.Sprintf("%s_%s", responseType, input.clusterID) } jsonBody, err := json.Marshal(map[string]interface{}{ "responseType": responseType, "id_token": oauthToken.Extra("id_token"), }) if err != nil { return nil, err } b, err := request(http.MethodPost, url, bytes.NewBuffer(jsonBody)) if err != nil { return nil, err } token := &managementClient.Token{} err = json.Unmarshal(b, token) if err != nil { return nil, err } return token, nil } 0707010000001B000081A4000000000000000000000001673C868500000F4E000000000000000000000000000000000000002D00000000rancher-cli-2.10.0/cmd/kubectl_token_test.gopackage cmd import ( "fmt" "net/http" "net/http/httptest" "testing" apiv3 "github.com/rancher/rancher/pkg/apis/management.cattle.io/v3" 
"github.com/stretchr/testify/assert" ) func Test_getAuthProviders(t *testing.T) { setupServer := func(response string) *httptest.Server { return httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { fmt.Fprint(w, response) })) } tt := []struct { name string server *httptest.Server expectedProviders []TypedProvider expectedErr string }{ { name: "response ok", server: setupServer(responseOK), expectedProviders: []TypedProvider{ &apiv3.LocalProvider{ AuthProvider: apiv3.AuthProvider{ Type: "localProvider", }, }, &apiv3.AzureADProvider{ AuthProvider: apiv3.AuthProvider{ Type: "azureADProvider", }, RedirectURL: "https://login.microsoftonline.com/258928db-3ed6-49fb-9a7e-52e492ffb066/oauth2/v2.0/authorize?client_id=56168f69-a732-48e2-aa21-8aa0909d0976&redirect_uri=https://rancher.mydomain.com/verify-auth-azure&response_type=code&scope=openid", TenantID: "258928db-3ed6-49fb-9a7e-52e492ffb066", OAuthProvider: apiv3.OAuthProvider{ ClientID: "56168f69-a732-48e2-aa21-8aa0909d0976", Scopes: []string{"openid", "profile", "email"}, OAuthEndpoint: apiv3.OAuthEndpoint{ AuthURL: "https://login.microsoftonline.com/258928db-3ed6-49fb-9a7e-52e492ffb066/oauth2/v2.0/authorize", DeviceAuthURL: "https://login.microsoftonline.com/258928db-3ed6-49fb-9a7e-52e492ffb066/oauth2/v2.0/devicecode", TokenURL: "https://login.microsoftonline.com/258928db-3ed6-49fb-9a7e-52e492ffb066/oauth2/v2.0/token", }, }, }, }, }, { name: "json error", server: setupServer(`hnjskjnksnj`), expectedErr: "invalid JSON response from", }, } for _, tc := range tt { tc := tc t.Run(tc.name, func(t *testing.T) { t.Cleanup(tc.server.Close) got, err := getAuthProviders(tc.server.URL) if tc.expectedErr != "" { assert.ErrorContains(t, err, tc.expectedErr) assert.Nil(t, got) } else { assert.Equal(t, tc.expectedProviders, got) assert.Nil(t, err) } }) } } var responseOK = `{ "data": [ { "actions": { "login": "…/v3-public/azureADProviders/azuread?action=login" }, "authUrl": 
"https://login.microsoftonline.com/258928db-3ed6-49fb-9a7e-52e492ffb066/oauth2/v2.0/authorize", "baseType": "authProvider", "clientId": "56168f69-a732-48e2-aa21-8aa0909d0976", "creatorId": null, "deviceAuthUrl": "https://login.microsoftonline.com/258928db-3ed6-49fb-9a7e-52e492ffb066/oauth2/v2.0/devicecode", "id": "azuread", "links": { "self": "…/v3-public/azureADProviders/azuread" }, "redirectUrl": "https://login.microsoftonline.com/258928db-3ed6-49fb-9a7e-52e492ffb066/oauth2/v2.0/authorize?client_id=56168f69-a732-48e2-aa21-8aa0909d0976&redirect_uri=https://rancher.mydomain.com/verify-auth-azure&response_type=code&scope=openid", "scopes": [ "openid", "profile", "email" ], "tenantId": "258928db-3ed6-49fb-9a7e-52e492ffb066", "tokenUrl": "https://login.microsoftonline.com/258928db-3ed6-49fb-9a7e-52e492ffb066/oauth2/v2.0/token", "type": "azureADProvider" }, { "actions": { "login": "…/v3-public/localProviders/local?action=login" }, "baseType": "authProvider", "creatorId": null, "id": "local", "links": { "self": "…/v3-public/localProviders/local" }, "type": "localProvider" } ] }` 0707010000001C000081A4000000000000000000000001673C8685000020EF000000000000000000000000000000000000002000000000rancher-cli-2.10.0/cmd/login.gopackage cmd import ( "bufio" "crypto/tls" "encoding/json" "errors" "fmt" "io" "net/http" "net/url" "os" "strconv" "strings" "github.com/sirupsen/logrus" "github.com/grantae/certinfo" "github.com/rancher/cli/cliclient" "github.com/rancher/cli/config" managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3" "github.com/urfave/cli" ) type LoginData struct { Project managementClient.Project Index int ClusterName string } type CACertResponse struct { Name string `json:"name"` Value string `json:"value"` } func LoginCommand() cli.Command { return cli.Command{ Name: "login", Aliases: []string{"l"}, Usage: "Login to a Rancher server", Action: loginSetup, ArgsUsage: "[SERVERURL]", Flags: []cli.Flag{ cli.StringFlag{ Name: "context", Usage: 
"Set the context during login",
			},
			cli.StringFlag{
				Name:  "token,t",
				Usage: "Token from the Rancher UI",
			},
			cli.StringFlag{
				Name:  "cacert",
				Usage: "Location of the CACerts to use",
			},
			cli.StringFlag{
				Name:  "name",
				Usage: "Name of the Server",
			},
			cli.BoolFlag{
				Name:  "skip-verify",
				Usage: "Skip verification of the CACerts presented by the Server",
			},
		},
	}
}

func loginSetup(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowCommandHelp(ctx, "login")
	}

	cf, err := loadConfig(ctx)
	if err != nil {
		return err
	}

	serverName := ctx.String("name")
	if serverName == "" {
		serverName = "rancherDefault"
	}

	serverConfig := &config.ServerConfig{}

	// Validate the url and drop the path
	u, err := url.ParseRequestURI(ctx.Args().First())
	if err != nil {
		return fmt.Errorf("Failed to parse SERVERURL (%s), make sure it is a valid HTTPS URL (e.g. https://rancher.yourdomain.com or https://1.1.1.1). Error: %s", ctx.Args().First(), err)
	}
	u.Path = ""
	serverConfig.URL = u.String()

	if ctx.String("token") != "" {
		auth := SplitOnColon(ctx.String("token"))
		if len(auth) != 2 {
			return errors.New("invalid token")
		}
		serverConfig.AccessKey = auth[0]
		serverConfig.SecretKey = auth[1]
		serverConfig.TokenKey = ctx.String("token")
	} else {
		// This can be removed once username and password is accepted
		return errors.New("token flag is required")
	}

	if ctx.String("cacert") != "" {
		cert, err := loadAndVerifyCert(ctx.String("cacert"))
		if err != nil {
			return err
		}
		serverConfig.CACerts = cert
	}

	c, err := cliclient.NewManagementClient(serverConfig)
	if err != nil {
		if _, ok := err.(*url.Error); ok && strings.Contains(err.Error(), "certificate signed by unknown authority") {
			// no cert was provided and it's most likely a self signed cert if
			// we get here so grab the cacert and see if the user accepts the server
			c, err = getCertFromServer(ctx, serverConfig)
			if err != nil {
				return err
			}
		} else {
			return err
		}
	}

	proj, err := getProjectContext(ctx, c)
	if err != nil {
		return err
	}

	// Set the default server and project for the user
	serverConfig.Project = proj
	cf.CurrentServer = serverName
	cf.Servers[serverName] = serverConfig

	err = cf.Write()
	if err != nil {
		return err
	}

	return nil
}

func getProjectContext(ctx *cli.Context, c *cliclient.MasterClient) (string, error) {
	// If context is given
	if ctx.String("context") != "" {
		context := ctx.String("context")
		// Check if given context is in valid format
		_, _, err := parseClusterAndProjectID(context)
		if err != nil {
			return "", fmt.Errorf("Unable to parse context (%s). Please provide context as local:p-xxxxx, c-xxxxx:p-xxxxx, c-xxxxx:project-xxxxx, c-m-xxxxxxxx:p-xxxxx or c-m-xxxxxxxx:project-xxxxx", context)
		}
		// Check if context exists
		_, err = Lookup(c, context, "project")
		if err != nil {
			return "", fmt.Errorf("Unable to find context (%s). Make sure the context exists and you have permissions to use it. Error: %s", context, err)
		}
		return context, nil
	}

	projectCollection, err := c.ManagementClient.Project.List(defaultListOpts(ctx))
	if err != nil {
		return "", err
	}

	projDataLen := len(projectCollection.Data)
	if projDataLen == 0 {
		logrus.Warn("No projects found, context could not be set. Please create a project and run `rancher login` again.")
		return "", nil
	}
	if projDataLen == 1 {
		logrus.Infof("Only 1 project available: %s", projectCollection.Data[0].Name)
		return projectCollection.Data[0].ID, nil
	}
	if projDataLen == 2 {
		var sysProj bool
		var defaultID string
		for _, proj := range projectCollection.Data {
			if proj.Name == "Default" {
				defaultID = proj.ID
			}
			if proj.Name == "System" {
				sysProj = true
			}
			if sysProj && defaultID != "" {
				return defaultID, nil
			}
		}
	}

	clusterNames, err := getClusterNames(ctx, c)
	if err != nil {
		return "", err
	}

	writer := NewTableWriter([][]string{
		{"NUMBER", "Index"},
		{"CLUSTER NAME", "ClusterName"},
		{"PROJECT ID", "Project.ID"},
		{"PROJECT NAME", "Project.Name"},
		{"PROJECT DESCRIPTION", "Project.Description"},
	}, ctx)

	for i, item := range projectCollection.Data {
		writer.Write(&LoginData{
			Project:     item,
			Index:       i + 1,
			ClusterName: clusterNames[item.ClusterID],
		})
	}

	writer.Close()
	if nil != writer.Err() {
		return "", writer.Err()
	}

	fmt.Print("Select a Project:")

	reader := bufio.NewReader(os.Stdin)
	errMessage := fmt.Sprintf("Invalid input, enter a number between 1 and %v: ", len(projectCollection.Data))
	var selection int

	for {
		input, err := reader.ReadString('\n')
		if err != nil {
			return "", err
		}
		input = strings.TrimSpace(input)

		if input != "" {
			i, err := strconv.Atoi(input)
			if err != nil {
				fmt.Print(errMessage)
				continue
			}
			if i <= len(projectCollection.Data) && i != 0 {
				selection = i - 1
				break
			}
			fmt.Print(errMessage)
			continue
		}
	}

	return projectCollection.Data[selection].ID, nil
}

func getCertFromServer(ctx *cli.Context, cf *config.ServerConfig) (*cliclient.MasterClient, error) {
	req, err := http.NewRequest("GET", cf.URL+"/v3/settings/cacerts", nil)
	if err != nil {
		return nil, err
	}
	req.SetBasicAuth(cf.AccessKey, cf.SecretKey)

	tr := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}

	client := &http.Client{Transport: tr}

	res, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()

	content, err := io.ReadAll(res.Body)
	if err != nil {
		return nil, err
	}

	var certResponse *CACertResponse
	err = json.Unmarshal(content, &certResponse)
	if err != nil {
		return nil, fmt.Errorf("Unable to parse response from %s/v3/settings/cacerts\nError: %s\nResponse:\n%s", cf.URL, err, content)
	}

	cert, err := verifyCert([]byte(certResponse.Value))
	if err != nil {
		return nil, err
	}

	// Get the server cert chain in a printable form
	serverCerts, err := processServerChain(res)
	if err != nil {
		return nil, err
	}

	if !ctx.Bool("skip-verify") {
		if ok := verifyUserAcceptsCert(serverCerts, cf.URL); !ok {
			return nil, errors.New("CACert of server was not accepted, unable to login")
		}
	}

	cf.CACerts = cert

	return cliclient.NewManagementClient(cf)
}

func verifyUserAcceptsCert(certs []string, url string) bool {
	fmt.Printf("The authenticity of server '%s' can't be established.\n", url)
	fmt.Printf("Cert chain is : %v \n", certs)
	fmt.Print("Do you want to continue connecting (yes/no)? ")

	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		input := scanner.Text()
		input = strings.ToLower(strings.TrimSpace(input))

		if input == "yes" || input == "y" {
			return true
		} else if input == "no" || input == "n" {
			return false
		}
		fmt.Printf("Please type 'yes' or 'no': ")
	}
	return false
}

func processServerChain(res *http.Response) ([]string, error) {
	var allCerts []string

	for _, cert := range res.TLS.PeerCertificates {
		result, err := certinfo.CertificateText(cert)
		if err != nil {
			return allCerts, err
		}
		allCerts = append(allCerts, result)
	}

	return allCerts, nil
}

func loginContext(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}

	cluster, err := getClusterByID(c, c.UserConfig.FocusedCluster())
	if err != nil {
		return err
	}
	clusterName := getClusterName(cluster)

	project, err := getProjectByID(c, c.UserConfig.Project)
	if err != nil {
		return err
	}

	fmt.Printf("Cluster:%s Project:%s\n", clusterName, project.Name)

	return nil
}
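The `--token` flag handled in `loginSetup` above expects a Rancher API token of the form `accessKey:secretKey`; the split itself is done by `SplitOnColon`, which is defined elsewhere in this package. A minimal, self-contained sketch of that validation, using a hypothetical `parseToken` helper (not part of the CLI):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// parseToken is a hypothetical helper mirroring the check in loginSetup:
// a Rancher token is "accessKey:secretKey"; anything else is rejected.
func parseToken(token string) (accessKey, secretKey string, err error) {
	parts := strings.Split(token, ":")
	if len(parts) != 2 {
		return "", "", errors.New("invalid token")
	}
	return parts[0], parts[1], nil
}

func main() {
	ak, sk, err := parseToken("token-abc12:xyz789secret")
	fmt.Println(ak, sk, err) // token-abc12 xyz789secret <nil>
}
```

Note that the real code also stores the full token as `TokenKey` alongside the split halves, so callers can use whichever form an endpoint requires.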
File: rancher-cli-2.10.0/cmd/machine.go

package cmd

import (
	"fmt"

	"github.com/rancher/cli/cliclient"
	capiClient "github.com/rancher/rancher/pkg/client/generated/cluster/v1beta1"
	"github.com/urfave/cli"
)

type MachineData struct {
	ID      string
	Machine capiClient.Machine
	Name    string
}

func MachineCommand() cli.Command {
	return cli.Command{
		Name:    "machines",
		Aliases: []string{"machine"},
		Usage:   "Operations on machines",
		Action:  defaultAction(machineLs),
		Subcommands: []cli.Command{
			{
				Name:        "ls",
				Usage:       "List machines",
				Description: "\nLists all machines in the current cluster.",
				ArgsUsage:   "None",
				Action:      machineLs,
				Flags: []cli.Flag{
					cli.StringFlag{
						Name:  "format",
						Usage: "'json', 'yaml' or Custom format: '{{.Machine.ID}} {{.Machine.Name}}'",
					},
					quietFlag,
				},
			},
		},
	}
}

func machineLs(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}

	collection, err := getMachinesList(ctx, c)
	if err != nil {
		return err
	}

	writer := NewTableWriter([][]string{
		{"ID", "ID"},
		{"NAME", "Name"},
		{"PHASE", "Machine.Status.Phase"},
	}, ctx)
	defer writer.Close()

	for _, item := range collection.Data {
		writer.Write(&MachineData{
			ID:      item.ID,
			Machine: item,
			Name:    getMachineName(item),
		})
	}

	return writer.Err()
}

func getMachinesList(
	ctx *cli.Context,
	c *cliclient.MasterClient,
) (*capiClient.MachineCollection, error) {
	filter := defaultListOpts(ctx)
	return c.CAPIClient.Machine.List(filter)
}

func getMachineByNodeName(
	ctx *cli.Context,
	c *cliclient.MasterClient,
	nodeName string,
) (capiClient.Machine, error) {
	machineCollection, err := getMachinesList(ctx, c)
	if err != nil {
		return capiClient.Machine{}, err
	}

	for _, machine := range machineCollection.Data {
		if machine.Status.NodeRef != nil && machine.Status.NodeRef.Name == nodeName {
			return machine, nil
		}
	}

	return capiClient.Machine{}, fmt.Errorf("no machine found associated with node [%s], run "+
		"`rancher machines` to see available nodes", nodeName)
}

func getMachineName(machine capiClient.Machine) string {
	if machine.Name != "" {
		return machine.Name
	} else if machine.Status.NodeRef != nil {
		return machine.Status.NodeRef.Name
	} else if machine.InfrastructureRef != nil {
		return machine.InfrastructureRef.Name
	}
	return machine.ID
}

File: rancher-cli-2.10.0/cmd/multiclusterapp.go

package cmd

import (
	"fmt"
	"reflect"
	"sort"
	"strings"
	"time"

	"github.com/rancher/cli/cliclient"
	"github.com/rancher/norman/types"
	"github.com/rancher/norman/types/slice"
	managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3"
	"github.com/sirupsen/logrus"
	"github.com/urfave/cli"
)

const (
	installMultiClusterAppDescription = `
Install a multi-cluster app in the current Rancher server.
This defaults to the newest version of the app template.
Specify a version using '--version' if required.

Example:
	# Install the redis template with no other options
	$ rancher multiclusterapp install redis appFoo

	# Install the redis template and specify an answers file location
	$ rancher multiclusterapp install --answers /example/answers.yaml redis appFoo

	# Install the redis template and set multiple answers and the version to install
	$ rancher multiclusterapp install --set foo=bar --set-string baz=bunk --version 1.0.1 redis appFoo

	# Install the redis template and set target projects to install
	$ rancher multiclusterapp install --target mycluster:Default --target c-98pjr:p-w6c5f redis appFoo

	# Block cli until installation has finished or encountered an error. Use after multiclusterapp install.
$ rancher wait <multiclusterapp-id> ` upgradeStrategySimultaneously = "simultaneously" upgradeStrategyRollingUpdate = "rolling-update" argUpgradeStrategy = "upgrade-strategy" argUpgradeBatchSize = "upgrade-batch-size" argUpgradeBatchInterval = "upgrade-batch-interval" ) var ( memberAccessTypes = []string{"owner", "member", "read-only"} upgradeStrategies = []string{upgradeStrategySimultaneously, upgradeStrategyRollingUpdate} ) type MultiClusterAppData struct { ID string App managementClient.MultiClusterApp Version string Targets string } type scopeAnswers struct { Answers map[string]string AnswersSetString map[string]string } func MultiClusterAppCommand() cli.Command { appLsFlags := []cli.Flag{ formatFlag, cli.BoolFlag{ Name: "quiet,q", Usage: "Only display IDs", }, } return cli.Command{ Name: "multiclusterapps", Aliases: []string{"multiclusterapp", "mcapps", "mcapp"}, Usage: "Operations with multi-cluster apps", Action: defaultAction(multiClusterAppLs), Flags: appLsFlags, Subcommands: []cli.Command{ { Name: "ls", Usage: "List multi-cluster apps", Description: "\nList all multi-cluster apps in the current Rancher server", ArgsUsage: "None", Action: multiClusterAppLs, Flags: appLsFlags, }, { Name: "delete", Usage: "Delete a multi-cluster app", Action: multiClusterAppDelete, ArgsUsage: "[APP_NAME]", }, { Name: "install", Usage: "Install a multi-cluster app", Description: installMultiClusterAppDescription, Action: multiClusterAppTemplateInstall, ArgsUsage: "[TEMPLATE_NAME, APP_NAME]...", Flags: []cli.Flag{ cli.StringFlag{ Name: "answers,a", Usage: "Path to an answers file, the format of the file is a map with key:value. This supports JSON and YAML.", }, cli.StringFlag{ Name: "values", Usage: "Path to a helm values file.", }, cli.StringSliceFlag{ Name: "set", Usage: "Set answers for the template, can be used multiple times. You can set overriding answers for specific clusters or projects " + "by providing cluster ID or project ID as the prefix. 
Example: --set foo=bar --set c-rvcrl:foo=bar --set c-rvcrl:p-8w2x8:foo=bar", }, cli.StringSliceFlag{ Name: "set-string", Usage: "Set string answers for the template (Skips Helm's type conversion), can be used multiple times. You can set overriding answers for specific clusters or projects " + "by providing cluster ID or project ID as the prefix. Example: --set-string foo=bar --set-string c-rvcrl:foo=bar --set-string c-rvcrl:p-8w2x8:foo=bar", }, cli.StringFlag{ Name: "version", Usage: "Version of the template to use", }, cli.BoolFlag{ Name: "no-prompt", Usage: "Suppress asking questions and use the default values when required answers are not provided", }, cli.StringSliceFlag{ Name: "target,t", Usage: "Target project names/ids to install the app into", }, cli.StringSliceFlag{ Name: "role", Usage: "Set roles required to launch/manage the apps in target projects. For example, set \"project-member\" role when the app needs to manage resources " + "in the projects in which it is deployed. Or set \"cluster-owner\" role when the app needs to manage resources in the clusters in which it is deployed. " + "(default: \"project-member\")", }, cli.StringSliceFlag{ Name: "member", Usage: "Set members of the app, with the same access type defined by --member-access-type", }, cli.StringFlag{ Name: "member-access-type", Usage: "Access type of the members. Specify only one value, and it applies to all members defined by --member. Valid options are 'owner', 'member' and 'read-only'", Value: "owner", }, cli.StringFlag{ Name: argUpgradeStrategy, Usage: "Strategy for upgrade. Valid options are \"rolling-update\" and \"simultaneously\"", Value: upgradeStrategySimultaneously, }, cli.Int64Flag{ Name: argUpgradeBatchSize, Usage: "The number of apps in target projects to be upgraded at a time. Only used if --upgrade-strategy is rolling-update.", Value: 1, }, cli.Int64Flag{ Name: argUpgradeBatchInterval, Usage: "The number of seconds between updating the next app during upgrade. 
Only used if --upgrade-strategy is rolling-update.", Value: 1, }, cli.IntFlag{ Name: "helm-timeout", Usage: "Amount of time for helm to wait for k8s commands (default is 300 secs). Example: --helm-timeout 600", Value: 300, }, cli.BoolFlag{ Name: "helm-wait", Usage: "Helm will wait for as long as timeout value, for installed resources to be ready (pods, PVCs, deployments, etc.). Example: --helm-wait", }, }, }, { Name: "rollback", Usage: "Rollback a multi-cluster app to a previous version", Action: multiClusterAppRollback, ArgsUsage: "[APP_NAME/APP_ID, REVISION_ID/REVISION_NAME]", Flags: []cli.Flag{ cli.BoolFlag{ Name: "show-revisions,r", Usage: "Show revisions available to rollback to", }, }, }, { Name: "upgrade", Usage: "Upgrade an app to a newer version", Action: multiClusterAppUpgrade, ArgsUsage: "[APP_NAME/APP_ID VERSION]", Flags: []cli.Flag{ cli.StringFlag{ Name: "answers,a", Usage: "Path to an answers file, the format of the file is a map with key:value. Supports JSON and YAML", }, cli.StringFlag{ Name: "values", Usage: "Path to a helm values file.", }, cli.StringSliceFlag{ Name: "set", Usage: "Set answers for the template, can be used multiple times. You can set overriding answers for specific clusters or projects " + "by providing cluster ID or project ID as the prefix. Example: --set foo=bar --set c-rvcrl:foo=bar --set c-rvcrl:p-8w2x8:foo=bar", }, cli.StringSliceFlag{ Name: "set-string", Usage: "Set string answers for the template (Skips Helm's type conversion), can be used multiple times. You can set overriding answers for specific clusters or projects " + "by providing cluster ID or project ID as the prefix. Example: --set-string foo=bar --set-string c-rvcrl:foo=bar --set-string c-rvcrl:p-8w2x8:foo=bar", }, cli.BoolFlag{ Name: "reset", Usage: "Reset all catalog app answers", }, cli.StringSliceFlag{ Name: "role,r", Usage: "Set roles required to launch/manage the apps in target projects. Specified roles on upgrade will override all the original roles. 
" + "For example, provide all existing roles if you want to add additional roles. Leave it empty to keep current roles", }, cli.BoolFlag{ Name: "show-versions,v", Usage: "Display versions available to upgrade to", }, cli.StringFlag{ Name: argUpgradeStrategy, Usage: "Strategy for upgrade. Valid options are \"rolling-update\" and \"simultaneously\"", }, cli.Int64Flag{ Name: argUpgradeBatchSize, Usage: "The number of apps in target projects to be upgraded at a time. Only used if --upgrade-strategy is rolling-update.", }, cli.Int64Flag{ Name: argUpgradeBatchInterval, Usage: "The number of seconds between updating the next app during upgrade. Only used if --upgrade-strategy is rolling-update.", }, }, }, { Name: "add-project", Usage: "Add target projects to a multi-cluster app", Action: addMcappTargetProject, Description: "Examples:\n #Add 'p1' project in cluster 'mycluster' to target projects of a multi-cluster app named 'myapp'\n rancher multiclusterapp add-project myapp mycluster:p1\n", ArgsUsage: "[APP_NAME/APP_ID, CLUSTER_NAME:PROJECT_NAME/PROJECT_ID...]", Flags: []cli.Flag{ cli.StringFlag{ Name: "answers,a", Usage: "Path to an answers file that provides overriding answers for the new target projects, the format of the file is a map with key:value. 
Supports JSON and YAML", }, cli.StringFlag{ Name: "values", Usage: "Path to a helm values file that provides overriding answers for the new target projects", }, cli.StringSliceFlag{ Name: "set", Usage: "Set overriding answers for the new target projects", }, cli.StringSliceFlag{ Name: "set-string", Usage: "Set overriding string answers for the new target projects", }, }, }, { Name: "delete-project", Usage: "Delete target projects from a multi-cluster app", Action: deleteMcappTargetProject, Description: "Examples:\n #Delete 'p1' project in cluster 'mycluster' from target projects of a multi-cluster app named 'myapp'\n rancher multiclusterapp delete-project myapp mycluster:p1\n", ArgsUsage: "[APP_NAME/APP_ID, CLUSTER_NAME:PROJECT_NAME/PROJECT_ID...]", }, { Name: "add-member", Usage: "Add members to a multi-cluster app", Action: addMcappMember, Description: "Examples:\n #Add 'user1' and 'user2' as the owners of a multi-cluster app named 'myapp'\n rancher multiclusterapp add-member myapp owner user1 user2\n", ArgsUsage: "[APP_NAME/APP_ID, ACCESS_TYPE, USER_NAME/USER_ID...]", }, { Name: "delete-member", Usage: "Delete members from a multi-cluster app", Action: deleteMcappMember, Description: "Examples:\n #Delete the membership of a user named 'user1' from a multi-cluster app named 'myapp'\n rancher multiclusterapp delete-member myapp user1\n", ArgsUsage: "[APP_NAME/APP_ID, USER_NAME/USER_ID...]", }, { Name: "list-members", Aliases: []string{"lm"}, Usage: "List current members of a multi-cluster app", ArgsUsage: "[APP_NAME/APP_ID]", Action: listMultiClusterAppMembers, Flags: []cli.Flag{ formatFlag, }, }, { Name: "list-answers", Aliases: []string{"la"}, Usage: "List current answers of a multi-cluster app", ArgsUsage: "[APP_NAME/APP_ID]", Action: listMultiClusterAppAnswers, Flags: []cli.Flag{ formatFlag, }, }, { Name: "list-templates", Aliases: []string{"lt"}, Usage: "List templates available for installation", Description: "\nList all app templates in the current Rancher 
server", ArgsUsage: "None", Action: globalTemplateLs, Flags: []cli.Flag{ formatFlag, cli.StringFlag{ Name: "catalog", Usage: "Specify the catalog to list templates for", }, }, }, { Name: "show-template", Aliases: []string{"st"}, Usage: "Show versions available to install for an app template", Description: "\nShow all available versions of an app template", ArgsUsage: "[TEMPLATE_ID]", Action: templateShow, }, { Name: "show-app", Aliases: []string{"sa"}, Usage: "Show an app's available versions and revisions", ArgsUsage: "[APP_NAME/APP_ID]", Action: showMultiClusterApp, Flags: []cli.Flag{ formatFlag, cli.BoolFlag{ Name: "show-roles", Usage: "Show roles required to manage the app", }, }, }, }, } } func multiClusterAppLs(ctx *cli.Context) error { c, err := GetClient(ctx) if err != nil { return err } collection, err := c.ManagementClient.MultiClusterApp.List(defaultListOpts(ctx)) if err != nil { return err } writer := NewTableWriter([][]string{ {"ID", "ID"}, {"NAME", "App.Name"}, {"STATE", "App.State"}, {"VERSION", "Version"}, {"TARGET_PROJECTS", "Targets"}, }, ctx) defer writer.Close() clusterCache, projectCache, err := getClusterProjectMap(ctx, c.ManagementClient) if err != nil { return err } templateVersionCache := make(map[string]string) for _, item := range collection.Data { version, err := getTemplateVersion(c.ManagementClient, templateVersionCache, item.TemplateVersionID) if err != nil { return err } targetNames := getReadableTargetNames(clusterCache, projectCache, item.Targets) writer.Write(&MultiClusterAppData{ ID: item.ID, App: item, Version: version, Targets: strings.Join(targetNames, ","), }) } return writer.Err() } func getTemplateVersion(client *managementClient.Client, templateVersionCache map[string]string, ID string) (string, error) { var version string if cachedVersion, ok := templateVersionCache[ID]; ok { version = cachedVersion } else { templateVersion, err := client.TemplateVersion.ByID(ID) if err != nil { return "", err } 
templateVersionCache[templateVersion.ID] = templateVersion.Version version = templateVersion.Version } return version, nil } func getClusterProjectMap(ctx *cli.Context, client *managementClient.Client) (map[string]managementClient.Cluster, map[string]managementClient.Project, error) { clusters := make(map[string]managementClient.Cluster) clusterCollectionData, err := listAllClusters(ctx, client) if err != nil { return nil, nil, err } for _, c := range clusterCollectionData { clusters[c.ID] = c } projects := make(map[string]managementClient.Project) projectCollectionData, err := listAllProjects(ctx, client) if err != nil { return nil, nil, err } for _, p := range projectCollectionData { projects[p.ID] = p } return clusters, projects, nil } func listAllClusters(ctx *cli.Context, client *managementClient.Client) ([]managementClient.Cluster, error) { clusterCollection, err := client.Cluster.List(defaultListOpts(ctx)) if err != nil { return nil, err } clusterCollectionData := clusterCollection.Data for { clusterCollection, err = clusterCollection.Next() if err != nil { return nil, err } if clusterCollection == nil { break } clusterCollectionData = append(clusterCollectionData, clusterCollection.Data...) if !clusterCollection.Pagination.Partial { break } } return clusterCollectionData, nil } func listAllProjects(ctx *cli.Context, client *managementClient.Client) ([]managementClient.Project, error) { projectCollection, err := client.Project.List(defaultListOpts(ctx)) if err != nil { return nil, err } projectCollectionData := projectCollection.Data for { projectCollection, err = projectCollection.Next() if err != nil { return nil, err } if projectCollection == nil { break } projectCollectionData = append(projectCollectionData, projectCollection.Data...) 
if !projectCollection.Pagination.Partial { break } } return projectCollectionData, nil } func getReadableTargetNames(clusterCache map[string]managementClient.Cluster, projectCache map[string]managementClient.Project, targets []managementClient.Target) []string { var targetNames []string for _, target := range targets { projectID := target.ProjectID clusterID, _ := parseScope(projectID) cluster, ok := clusterCache[clusterID] if !ok { logrus.Debugf("Cannot get readable name for target %q, showing ID", target.ProjectID) targetNames = append(targetNames, target.ProjectID) continue } project, ok := projectCache[projectID] if !ok { logrus.Debugf("Cannot get readable name for target %q, showing ID", target.ProjectID) targetNames = append(targetNames, target.ProjectID) continue } targetNames = append(targetNames, concatScope(cluster.Name, project.Name)) } return targetNames } func multiClusterAppDelete(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } for _, name := range ctx.Args() { _, app, err := searchForMcapp(c, name) if err != nil { return err } err = c.ManagementClient.MultiClusterApp.Delete(app) if err != nil { return err } } return nil } func multiClusterAppUpgrade(ctx *cli.Context) error { c, err := GetClient(ctx) if err != nil { return err } if ctx.Bool("show-versions") { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } _, app, err := searchForMcapp(c, ctx.Args().First()) if err != nil { return err } return outputMultiClusterAppVersions(ctx, c, app) } if ctx.NArg() != 2 { return cli.ShowSubcommandHelp(ctx) } upgradeStrategy := strings.ToLower(ctx.String(argUpgradeStrategy)) if ctx.IsSet(argUpgradeStrategy) && !slice.ContainsString(upgradeStrategies, upgradeStrategy) { return fmt.Errorf("invalid upgrade-strategy %q, supported values are \"rolling-update\" and \"simultaneously\"", upgradeStrategy) } _, app, err := searchForMcapp(c, ctx.Args().First()) if err != nil { 
return err } update := make(map[string]interface{}) answers, answersSetString := fromMultiClusterAppAnswers(app.Answers) answers, answersSetString, err = processAnswerUpdates(ctx, answers, answersSetString) if err != nil { return err } update["answers"], err = toMultiClusterAppAnswers(c, answers, answersSetString) if err != nil { return err } version := ctx.Args().Get(1) templateVersion, err := c.ManagementClient.TemplateVersion.ByID(app.TemplateVersionID) if err != nil { return err } toUpgradeTemplateversionID := strings.TrimSuffix(templateVersion.ID, templateVersion.Version) + version // Check if the template version is valid before applying it _, err = c.ManagementClient.TemplateVersion.ByID(toUpgradeTemplateversionID) if err != nil { templateName := strings.TrimSuffix(toUpgradeTemplateversionID, "-"+version) return fmt.Errorf( "version %s for template %s is invalid, run 'rancher mcapp show-template %s' for available versions", version, templateName, templateName, ) } update["templateVersionId"] = toUpgradeTemplateversionID roles := ctx.StringSlice("role") if len(roles) > 0 { update["roles"] = roles } else { update["roles"] = app.Roles } if upgradeStrategy == upgradeStrategyRollingUpdate { update["upgradeStrategy"] = &managementClient.UpgradeStrategy{ RollingUpdate: &managementClient.RollingUpdate{ BatchSize: ctx.Int64(argUpgradeBatchSize), Interval: ctx.Int64(argUpgradeBatchInterval), }, } } else if upgradeStrategy == upgradeStrategySimultaneously { update["upgradeStrategy"] = nil } if _, err := c.ManagementClient.MultiClusterApp.Update(app, update); err != nil { return err } return nil } func multiClusterAppRollback(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowSubcommandHelp(ctx) } c, err := GetClient(ctx) if err != nil { return err } resource, app, err := searchForMcapp(c, ctx.Args().First()) if err != nil { return err } if ctx.Bool("show-revisions") { return outputMultiClusterAppRevisions(ctx, c, resource, app) } if ctx.NArg() != 2 { return 
cli.ShowSubcommandHelp(ctx) } revisionResource, err := Lookup(c, ctx.Args().Get(1), managementClient.MultiClusterAppRevisionType) if err != nil { return err } rr := &managementClient.MultiClusterAppRollbackInput{ RevisionID: revisionResource.ID, } if err := c.ManagementClient.MultiClusterApp.ActionRollback(app, rr); err != nil { return err } return nil } func multiClusterAppTemplateInstall(ctx *cli.Context) error { if ctx.NArg() > 2 { return cli.ShowSubcommandHelp(ctx) } templateName := ctx.Args().First() appName := ctx.Args().Get(1) c, err := GetClient(ctx) if err != nil { return err } roles := ctx.StringSlice("role") if len(roles) == 0 { // Handle the default here because the cli default value for stringSlice do not get overridden. roles = []string{"project-member"} } app := &managementClient.MultiClusterApp{ Name: appName, Roles: roles, } upgradeStrategy := strings.ToLower(ctx.String(argUpgradeStrategy)) if !slice.ContainsString(upgradeStrategies, upgradeStrategy) { return fmt.Errorf("invalid upgrade-strategy %q, supported values are \"rolling-update\" and \"simultaneously\"", upgradeStrategy) } else if upgradeStrategy == upgradeStrategyRollingUpdate { app.UpgradeStrategy = &managementClient.UpgradeStrategy{ RollingUpdate: &managementClient.RollingUpdate{ BatchSize: ctx.Int64(argUpgradeBatchSize), Interval: ctx.Int64(argUpgradeBatchInterval), }, } } resource, err := Lookup(c, templateName, managementClient.TemplateType) if err != nil { return err } template, err := getFilteredTemplate(ctx, c, resource.ID) if err != nil { return err } latestVersion, err := getTemplateLatestVersion(template) if err != nil { return err } templateVersionID := templateVersionIDFromVersionLink(template.VersionLinks[latestVersion]) userVersion := ctx.String("version") if userVersion != "" { if link, ok := template.VersionLinks[userVersion]; ok { templateVersionID = templateVersionIDFromVersionLink(link) } else { return fmt.Errorf( "version %s for template %s is invalid, run 'rancher 
mcapp show-template %s' for a list of versions", userVersion, templateName, templateName,
		)
	}
}

	templateVersion, err := c.ManagementClient.TemplateVersion.ByID(templateVersionID)
	if err != nil {
		return err
	}

	interactive := !ctx.Bool("no-prompt")
	answers, answersSetString, err := processAnswerInstall(ctx, templateVersion, nil, nil, interactive, true)
	if err != nil {
		return err
	}

	projectIDs, err := lookupProjectIDsFromTargets(c, ctx.StringSlice("target"))
	if err != nil {
		return err
	}
	for _, target := range projectIDs {
		app.Targets = append(app.Targets, managementClient.Target{
			ProjectID: target,
		})
	}
	if len(projectIDs) == 0 {
		app.Targets = append(app.Targets, managementClient.Target{
			ProjectID: c.UserConfig.Project,
		})
	}

	app.Answers, err = toMultiClusterAppAnswers(c, answers, answersSetString)
	if err != nil {
		return err
	}
	app.TemplateVersionID = templateVersionID

	accessType := strings.ToLower(ctx.String("member-access-type"))
	if !slice.ContainsString(memberAccessTypes, accessType) {
		return fmt.Errorf("invalid access type %q, supported values are \"owner\",\"member\" and \"read-only\"", accessType)
	}
	members, err := addMembersByNames(ctx, c, app.Members, ctx.StringSlice("member"), accessType)
	if err != nil {
		return err
	}
	app.Members = members

	app.Wait = ctx.Bool("helm-wait")
	app.Timeout = ctx.Int64("helm-timeout")

	app, err = c.ManagementClient.MultiClusterApp.Create(app)
	if err != nil {
		return err
	}

	fmt.Printf("Installing multi-cluster app %q...\n", app.Name)
	return nil
}

func lookupProjectIDsFromTargets(c *cliclient.MasterClient, targets []string) ([]string, error) {
	var projectIDs []string
	for _, target := range targets {
		projectID, err := lookupProjectIDFromProjectScope(c, target)
		if err != nil {
			return nil, err
		}
		projectIDs = append(projectIDs, projectID)
	}
	return projectIDs, nil
}

func lookupClusterIDFromClusterScope(c *cliclient.MasterClient, clusterNameOrID string) (string, error) {
	clusterResource, err := Lookup(c, clusterNameOrID, managementClient.ClusterType)
	if err != nil {
		return "", err
	}
	return clusterResource.ID, nil
}

func lookupProjectIDFromProjectScope(c *cliclient.MasterClient, scope string) (string, error) {
	cluster, project := parseScope(scope)

	clusterResource, err := Lookup(c, cluster, managementClient.ClusterType)
	if err != nil {
		return "", err
	}

	if clusterResource.ID == cluster {
		// Lookup by ID
		projectResource, err := Lookup(c, scope, managementClient.ProjectType)
		if err != nil {
			return "", err
		}
		return projectResource.ID, nil
	}

	// Lookup by clusterName:projectName
	projectResource, err := Lookup(c, project, managementClient.ProjectType)
	if err != nil {
		return "", err
	}
	return projectResource.ID, nil
}

func toMultiClusterAppAnswers(c *cliclient.MasterClient, answers, answersSetString map[string]string) ([]managementClient.Answer, error) {
	answerMap := make(map[string]scopeAnswers)
	var answerSlice []managementClient.Answer

	if err := setValueInAnswerMapByScope(c, answerMap, answers, "Answers"); err != nil {
		return nil, err
	}
	if err := setValueInAnswerMapByScope(c, answerMap, answersSetString, "AnswersSetString"); err != nil {
		return nil, err
	}

	for k, v := range answerMap {
		answer := managementClient.Answer{
			Values:          v.Answers,
			ValuesSetString: v.AnswersSetString,
		}
		if strings.Contains(k, ":") {
			answer.ProjectID = k
		} else if k != "" {
			answer.ClusterID = k
		}
		answerSlice = append(answerSlice, answer)
	}
	return answerSlice, nil
}

func setValueInAnswerMapByScope(c *cliclient.MasterClient, answerMap map[string]scopeAnswers, inputAnswers map[string]string, scopeAnswersFieldStr string) error {
	for k, v := range inputAnswers {
		switch parts := strings.SplitN(k, ":", 3); {
		case len(parts) == 1:
			// Global scope
			setValueInAnswerMap(answerMap, "", "", scopeAnswersFieldStr, k, v)
		case len(parts) == 2:
			// Cluster scope
			clusterNameOrID := parts[0]
			clusterID, err := lookupClusterIDFromClusterScope(c, clusterNameOrID)
			if err != nil {
				return err
			}
			setValueInAnswerMap(answerMap, clusterNameOrID, clusterID, scopeAnswersFieldStr, parts[1], v)
		case len(parts) == 3:
			// Project scope
			projectScope := concatScope(parts[0], parts[1])
			projectID, err := lookupProjectIDFromProjectScope(c, projectScope)
			if err != nil {
				return err
			}
			setValueInAnswerMap(answerMap, projectScope, projectID, scopeAnswersFieldStr, parts[2], v)
		}
	}
	return nil
}

func setValueInAnswerMap(answerMap map[string]scopeAnswers, scope, scopeID, fieldNameToUpdate, key, value string) {
	var exist bool
	if answerMap[scopeID].Answers == nil && answerMap[scopeID].AnswersSetString == nil {
		answerMap[scopeID] = scopeAnswers{
			Answers:          make(map[string]string),
			AnswersSetString: make(map[string]string),
		}
	}
	scopeAnswersStruct := answerMap[scopeID]
	scopeAnswersMap := reflect.ValueOf(&scopeAnswersStruct).Elem().FieldByName(fieldNameToUpdate)
	for _, k := range scopeAnswersMap.MapKeys() {
		// Compare the key strings; comparing reflect.Value structs with ==
		// (as the original `reflect.ValueOf(key) == k` did) would never match
		// keys returned by MapKeys.
		if k.String() == key {
			exist = true
			break
		}
	}
	if exist {
		// It is possible that there are different forms of the same answer key in aggregated answers
		// In this case, name format from users overrides id format from existing app answers.
		if scope != scopeID {
			scopeAnswersMap.SetMapIndex(reflect.ValueOf(key), reflect.ValueOf(value))
		}
	} else {
		scopeAnswersMap.SetMapIndex(reflect.ValueOf(key), reflect.ValueOf(value))
	}
}

func fromMultiClusterAppAnswers(answerSlice []managementClient.Answer) (map[string]string, map[string]string) {
	answers := make(map[string]string)
	answersSetString := make(map[string]string)
	for _, answer := range answerSlice {
		for k, v := range answer.Values {
			scopedKey := getAnswerScopedKey(answer, k)
			answers[scopedKey] = v
		}
		for k, v := range answer.ValuesSetString {
			scopedKey := getAnswerScopedKey(answer, k)
			answersSetString[scopedKey] = v
		}
	}
	return answers, answersSetString
}

func getAnswerScopedKey(answer managementClient.Answer, key string) string {
	scope := ""
	if answer.ProjectID != "" {
		scope = answer.ProjectID
	} else if answer.ClusterID != "" {
		scope = answer.ClusterID
	}
	scopedKey := key
	if scope != "" {
		scopedKey = concatScope(scope, key)
	}
	return scopedKey
}

func addMcappTargetProject(ctx *cli.Context) error {
	if len(ctx.Args()) < 2 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	_, app, err := searchForMcapp(c, ctx.Args().First())
	if err != nil {
		return err
	}
	input, err := getTargetInput(ctx, c)
	if err != nil {
		return err
	}
	if err := c.ManagementClient.MultiClusterApp.ActionAddProjects(app, input); err != nil {
		return err
	}
	return nil
}

func deleteMcappTargetProject(ctx *cli.Context) error {
	if len(ctx.Args()) < 2 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	_, app, err := searchForMcapp(c, ctx.Args().First())
	if err != nil {
		return err
	}
	input, err := getTargetInput(ctx, c)
	if err != nil {
		return err
	}
	return c.ManagementClient.MultiClusterApp.ActionRemoveProjects(app, input)
}

func getTargetInput(ctx *cli.Context, c *cliclient.MasterClient) (*managementClient.UpdateMultiClusterAppTargetsInput, error) {
	targets := ctx.Args()[1:]
	projectIDs, err := lookupProjectIDsFromTargets(c, targets)
	if err != nil {
		return nil, err
	}
	answers, answersSetString, err := processAnswerUpdates(ctx, nil, nil)
	if err != nil {
		return nil, err
	}
	mcaAnswers, err := toMultiClusterAppAnswers(c, answers, answersSetString)
	if err != nil {
		return nil, err
	}
	input := &managementClient.UpdateMultiClusterAppTargetsInput{
		Projects: projectIDs,
		Answers:  mcaAnswers,
	}
	return input, nil
}

func addMcappMember(ctx *cli.Context) error {
	if len(ctx.Args()) < 3 {
		return cli.ShowSubcommandHelp(ctx)
	}
	appName := ctx.Args().First()
	accessType := strings.ToLower(ctx.Args().Get(1))
	memberNames := ctx.Args()[2:]
	if !slice.ContainsString(memberAccessTypes, accessType) {
		return fmt.Errorf("invalid access type %q, supported values are \"owner\",\"member\" and \"read-only\"", accessType)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	_, app, err := searchForMcapp(c, appName)
	if err != nil {
		return err
	}
	members, err := addMembersByNames(ctx, c, app.Members, memberNames, accessType)
	if err != nil {
		return err
	}
	update := make(map[string]interface{})
	update["members"] = members
	update["roles"] = app.Roles
	_, err = c.ManagementClient.MultiClusterApp.Update(app, update)
	return err
}

func deleteMcappMember(ctx *cli.Context) error {
	if len(ctx.Args()) < 2 {
		return cli.ShowSubcommandHelp(ctx)
	}
	appName := ctx.Args().First()
	memberNames := ctx.Args()[1:]
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	_, app, err := searchForMcapp(c, appName)
	if err != nil {
		return err
	}
	members, err := deleteMembersByNames(ctx, c, app.Members, memberNames)
	if err != nil {
		return err
	}
	update := make(map[string]interface{})
	update["members"] = members
	update["roles"] = app.Roles
	_, err = c.ManagementClient.MultiClusterApp.Update(app, update)
	return err
}

func showMultiClusterApp(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	resource, app, err := searchForMcapp(c, ctx.Args().First())
	if err != nil {
		return err
	}
	err = outputMultiClusterAppRevisions(ctx, c, resource, app)
	if err != nil {
		return err
	}
	fmt.Println()
	err = outputMultiClusterAppVersions(ctx, c, app)
	if err != nil {
		return err
	}
	if ctx.Bool("show-roles") {
		fmt.Println()
		err = outputMultiClusterAppRoles(ctx, c, app)
		if err != nil {
			return err
		}
	}
	return nil
}

func listMultiClusterAppMembers(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	_, app, err := searchForMcapp(c, ctx.Args().First())
	if err != nil {
		return err
	}
	return outputMembers(ctx, c, app.Members)
}

func listMultiClusterAppAnswers(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	_, app, err := searchForMcapp(c, ctx.Args().First())
	if err != nil {
		return err
	}
	return outputMultiClusterAppAnswers(ctx, c, app)
}

func searchForMcapp(c *cliclient.MasterClient, name string) (*types.Resource, *managementClient.MultiClusterApp, error) {
	resource, err := Lookup(c, name, managementClient.MultiClusterAppType)
	if err != nil {
		return nil, nil, err
	}
	app, err := c.ManagementClient.MultiClusterApp.ByID(resource.ID)
	if err != nil {
		return nil, nil, err
	}
	return resource, app, nil
}

func outputMultiClusterAppVersions(ctx *cli.Context, c *cliclient.MasterClient, app *managementClient.MultiClusterApp) error {
	templateVersion, err := c.ManagementClient.TemplateVersion.ByID(app.TemplateVersionID)
	if err != nil {
		return err
	}
	ver, err := getRancherServerVersion(c)
	if err != nil {
		return err
	}
	filter := defaultListOpts(ctx)
	filter.Filters["rancherVersion"] = ver
	template := &managementClient.Template{}
	if err := c.ManagementClient.Ops.DoGet(templateVersion.Links["template"], filter, template); err != nil {
		return err
	}
	writer := NewTableWriter([][]string{
		{"CURRENT", "Current"},
		{"VERSION", "Version"},
	}, ctx)
	defer writer.Close()
	sortedVersions, err := sortTemplateVersions(template)
	if err != nil {
		return err
	}
	for _, version := range sortedVersions {
		var current string
		if version.String() == templateVersion.Version {
			current = "*"
		}
		writer.Write(&VersionData{
			Current: current,
			Version: version.String(),
		})
	}
	return writer.Err()
}

func outputMultiClusterAppRevisions(ctx *cli.Context, c *cliclient.MasterClient, resource *types.Resource, app *managementClient.MultiClusterApp) error {
	revisions := &managementClient.MultiClusterAppRevisionCollection{}
	if err := c.ManagementClient.GetLink(*resource, "revisions", revisions); err != nil {
		return err
	}
	var sorted revSlice
	for _, rev := range revisions.Data {
		parsedTime, err := time.Parse(time.RFC3339, rev.Created)
		if err != nil {
			return err
		}
		sorted = append(sorted, revision{Name: rev.Name, Created: parsedTime})
	}
	sort.Sort(sorted)
	writer := NewTableWriter([][]string{
		{"CURRENT", "Current"},
		{"REVISION", "Name"},
		{"CREATED", "Human"},
	}, ctx)
	defer writer.Close()
	for _, rev := range sorted {
		if rev.Name == app.Status.RevisionID {
			rev.Current = "*"
		}
		rev.Human = rev.Created.Format("02 Jan 2006 15:04:05 MST")
		writer.Write(rev)
	}
	return writer.Err()
}

func outputMultiClusterAppRoles(ctx *cli.Context, c *cliclient.MasterClient, app *managementClient.MultiClusterApp) error {
	writer := NewTableWriter([][]string{
		{"ROLE_NAME", "Name"},
	}, ctx)
	defer writer.Close()
	for _, r := range app.Roles {
		writer.Write(map[string]string{"Name": r})
	}
	return writer.Err()
}

func outputMultiClusterAppAnswers(ctx *cli.Context, c *cliclient.MasterClient, app *managementClient.MultiClusterApp) error {
	writer := NewTableWriter([][]string{
		{"SCOPE", "Scope"},
		{"QUESTION", "Question"},
		{"ANSWER", "Answer"},
	}, ctx)
	defer writer.Close()
	answers := app.Answers
	// Sort answers by scope in the Global-Cluster-Project order.
	// Rank each scope so the comparator is a valid strict weak ordering
	// (the previous comparator mixed fields of i and j and was not transitive).
	scopeRank := func(a managementClient.Answer) int {
		switch {
		case a.ClusterID == "" && a.ProjectID == "":
			return 0 // Global
		case a.ClusterID != "":
			return 1 // Cluster
		default:
			return 2 // Project
		}
	}
	sort.Slice(answers, func(i, j int) bool {
		return scopeRank(answers[i]) < scopeRank(answers[j])
	})
	var scope string
	for _, r := range answers {
		scope = "Global"
		if r.ClusterID != "" {
			cluster, err := getClusterByID(c, r.ClusterID)
			if err != nil {
				return err
			}
			scope = fmt.Sprintf("All projects in cluster %s", cluster.Name)
		} else if r.ProjectID != "" {
			project, err := getProjectByID(c, r.ProjectID)
			if err != nil {
				return err
			}
			scope = fmt.Sprintf("Project %s", project.Name)
		}
		for key, value := range r.Values {
			writer.Write(map[string]string{
				"Scope":    scope,
				"Question": key,
				"Answer":   value,
			})
		}
		for key, value := range r.ValuesSetString {
			writer.Write(map[string]string{
				"Scope":    scope,
				"Question": key,
				"Answer":   fmt.Sprintf("\"%s\"", value),
			})
		}
	}
	return writer.Err()
}

func globalTemplateLs(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	filter := defaultListOpts(ctx)
	if ctx.String("catalog") != "" {
		resource, err := Lookup(c, ctx.String("catalog"), managementClient.CatalogType)
		if err != nil {
			return err
		}
		filter.Filters["catalogId"] = resource.ID
	}
	collection, err := c.ManagementClient.Template.List(filter)
	if err != nil {
		return err
	}
	writer := NewTableWriter([][]string{
		{"ID", "ID"},
		{"NAME", "Template.Name"},
		{"CATEGORY", "Category"},
	}, ctx)
	defer writer.Close()
	for _, item := range collection.Data {
		// Skip non-global catalogs
		if item.CatalogID == "" {
			continue
		}
		writer.Write(&TemplateData{
			ID:       item.ID,
			Template: item,
			Category: strings.Join(item.Categories, ","),
		})
	}
	return writer.Err()
}

func concatScope(scope, key string) string {
	return fmt.Sprintf("%s:%s", scope, key)
}

func parseScope(ref string) (scope string, key string) {
	parts := strings.SplitN(ref, ":", 2)
	if len(parts) == 1 {
		return "", parts[0]
	}
	return parts[0], parts[1]
}
0707010000001F000081A4000000000000000000000001673C868500000AB0000000000000000000000000000000000000002F00000000rancher-cli-2.10.0/cmd/multiclusterapp_test.go
package cmd

import (
	"testing"

	client "github.com/rancher/rancher/pkg/client/generated/management/v3"
	"github.com/stretchr/testify/assert"
)

func
TestFromMultiClusterAppAnswers(t *testing.T) {
	assert := assert.New(t)
	answerSlice := []client.Answer{
		{
			ProjectID: "c-1:p-1",
			Values: map[string]string{
				"var-1": "val1",
				"var-2": "val2",
			},
			ValuesSetString: map[string]string{
				"str-var-1": "str-val1",
				"str-var-2": "str-val2",
			},
		},
		{
			ProjectID: "c-1:p-2",
			Values: map[string]string{
				"var-3": "val3",
			},
			ValuesSetString: map[string]string{
				"str-var-3": "str-val3",
			},
		},
		{
			ClusterID: "c-1",
			Values: map[string]string{
				"var-4": "val4",
			},
			ValuesSetString: map[string]string{
				"str-var-4": "str-val4",
			},
		},
		{
			ClusterID: "c-2",
			Values: map[string]string{
				"var-5": "val5",
			},
			ValuesSetString: map[string]string{
				"str-var-5": "str-val5",
			},
		},
		{
			Values: map[string]string{
				"var-6": "val6",
			},
			ValuesSetString: map[string]string{
				"str-var-6": "str-val6",
			},
		},
	}
	answers, answersSetString := fromMultiClusterAppAnswers(answerSlice)
	assert.Equal(len(answers), 6)
	assert.Equal(answers["c-1:p-1:var-1"], "val1")
	assert.Equal(answers["c-1:p-1:var-2"], "val2")
	assert.Equal(answers["c-1:p-2:var-3"], "val3")
	assert.Equal(answers["c-1:var-4"], "val4")
	assert.Equal(answers["c-2:var-5"], "val5")
	assert.Equal(answers["var-6"], "val6")
	assert.Equal(len(answersSetString), 6)
	assert.Equal(answersSetString["c-1:p-1:str-var-1"], "str-val1")
	assert.Equal(answersSetString["c-1:p-1:str-var-2"], "str-val2")
	assert.Equal(answersSetString["c-1:p-2:str-var-3"], "str-val3")
	assert.Equal(answersSetString["c-1:str-var-4"], "str-val4")
	assert.Equal(answersSetString["c-2:str-var-5"], "str-val5")
	assert.Equal(answersSetString["str-var-6"], "str-val6")
}

func TestGetReadableTargetNames(t *testing.T) {
	assert := assert.New(t)
	clusters := map[string]client.Cluster{
		"c-1": {
			Name: "cn-1",
		},
		"c-2": {
			Name: "cn-2",
		},
	}
	projects := map[string]client.Project{
		"c-1:p-1": {
			Name: "pn-1",
		},
		"c-1:p-2": {
			Name: "pn-2",
		},
		"c-2:p-3": {
			Name: "pn-3",
		},
		"c-2:p-4": {
			Name: "pn-4",
		},
	}
	targets := []client.Target{
		{
			ProjectID: "c-1:p-1",
		},
		{
			ProjectID: "c-1:p-2",
		},
		{
			ProjectID: "c-2:p-3",
		},
	}
	result := getReadableTargetNames(clusters, projects, targets)
	assert.Contains(result, "cn-1:pn-1")
	assert.Contains(result, "cn-1:pn-2")
	assert.Contains(result, "cn-2:pn-3")

	targets = []client.Target{
		{
			ProjectID: "c-0:p-0",
		},
	}
	result = getReadableTargetNames(clusters, projects, targets)
	assert.Contains(result, "c-0:p-0")
}
07070100000020000081A4000000000000000000000001673C868500001464000000000000000000000000000000000000002400000000rancher-cli-2.10.0/cmd/namespace.go
package cmd

import (
	"fmt"

	"github.com/pkg/errors"
	"github.com/rancher/cli/cliclient"
	clusterClient "github.com/rancher/rancher/pkg/client/generated/cluster/v3"
	"github.com/urfave/cli"
)

type NamespaceData struct {
	ID        string
	Namespace clusterClient.Namespace
}

func NamespaceCommand() cli.Command {
	return cli.Command{
		Name:    "namespaces",
		Aliases: []string{"namespace"},
		Usage:   "Operations on namespaces",
		Action:  defaultAction(namespaceLs),
		Flags: []cli.Flag{
			quietFlag,
		},
		Subcommands: []cli.Command{
			{
				Name:        "ls",
				Usage:       "List namespaces",
				Description: "\nLists all namespaces in the current project.",
				ArgsUsage:   "None",
				Action:      namespaceLs,
				Flags: []cli.Flag{
					cli.BoolFlag{
						Name:  "all-namespaces",
						Usage: "List all namespaces in the current cluster",
					},
					cli.StringFlag{
						Name:  "format",
						Usage: "'json', 'yaml' or Custom format: '{{.Namespace.ID}} {{.Namespace.Name}}'",
					},
					quietFlag,
				},
			},
			{
				Name:        "create",
				Usage:       "Create a namespace",
				Description: "\nCreates a namespace in the current cluster.",
				ArgsUsage:   "[NEWPROJECTNAME...]",
				Action:      namespaceCreate,
				Flags: []cli.Flag{
					cli.StringFlag{
						Name:  "description",
						Usage: "Description to apply to the namespace",
					},
				},
			},
			{
				Name:      "delete",
				Aliases:   []string{"rm"},
				Usage:     "Delete a namespace by name or ID",
				ArgsUsage: "[NAMESPACEID NAMESPACENAME]",
				Action:    namespaceDelete,
			},
			{
				Name:      "move",
				Usage:     "Move a namespace to a different project",
				ArgsUsage: "[NAMESPACEID/NAMESPACENAME PROJECTID]",
				Action:    namespaceMove,
			},
		},
	}
}

func
namespaceLs(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	collection, err := getNamespaceList(ctx, c)
	if err != nil {
		return err
	}
	if !ctx.Bool("all-namespaces") {
		var projectNamespaces []clusterClient.Namespace
		for _, namespace := range collection.Data {
			if namespace.ProjectID == c.UserConfig.Project {
				projectNamespaces = append(projectNamespaces, namespace)
			}
		}
		collection.Data = projectNamespaces
	}
	writer := NewTableWriter([][]string{
		{"ID", "ID"},
		{"NAME", "Namespace.Name"},
		{"STATE", "Namespace.State"},
		{"PROJECT", "Namespace.ProjectID"},
		{"DESCRIPTION", "Namespace.Description"},
	}, ctx)
	defer writer.Close()
	for _, item := range collection.Data {
		writer.Write(&NamespaceData{
			ID:        item.ID,
			Namespace: item,
		})
	}
	return writer.Err()
}

func namespaceCreate(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	newNamespace := &clusterClient.Namespace{
		Name:        ctx.Args().First(),
		ProjectID:   c.UserConfig.Project,
		Description: ctx.String("description"),
	}
	_, err = c.ClusterClient.Namespace.Create(newNamespace)
	if err != nil {
		return err
	}
	return nil
}

func namespaceDelete(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	for _, arg := range ctx.Args() {
		resource, err := Lookup(c, arg, "namespace")
		if err != nil {
			return err
		}
		namespace, err := getNamespaceByID(c, resource.ID)
		if err != nil {
			return err
		}
		err = c.ClusterClient.Namespace.Delete(namespace)
		if err != nil {
			return err
		}
	}
	return nil
}

func namespaceMove(ctx *cli.Context) error {
	if ctx.NArg() < 2 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	resource, err := Lookup(c, ctx.Args().First(), "namespace")
	if err != nil {
		return err
	}
	namespace, err := getNamespaceByID(c, resource.ID)
	if err != nil {
		return err
	}
	projResource, err := Lookup(c, ctx.Args().Get(1), "project")
	if err != nil {
		return err
	}
	proj, err := getProjectByID(c, projResource.ID)
	if err != nil {
		return err
	}
	if anno, ok := namespace.Annotations["cattle.io/appIds"]; ok && anno != "" {
		return errors.Errorf("Namespace %v cannot be moved", namespace.Name)
	}
	if _, ok := namespace.Actions["move"]; ok {
		move := &clusterClient.NamespaceMove{
			ProjectID: proj.ID,
		}
		return c.ClusterClient.Namespace.ActionMove(namespace, move)
	}
	update := make(map[string]string)
	update["projectId"] = proj.ID
	_, err = c.ClusterClient.Namespace.Update(namespace, update)
	if err != nil {
		return err
	}
	return nil
}

func getNamespaceList(
	ctx *cli.Context,
	c *cliclient.MasterClient,
) (*clusterClient.NamespaceCollection, error) {
	collection, err := c.ClusterClient.Namespace.List(defaultListOpts(ctx))
	if err != nil {
		return nil, err
	}
	return collection, nil
}

func getNamespaceByID(
	c *cliclient.MasterClient,
	namespaceID string,
) (*clusterClient.Namespace, error) {
	namespace, err := c.ClusterClient.Namespace.ByID(namespaceID)
	if err != nil {
		return nil, fmt.Errorf("no namespace found with the ID [%s], run "+
			"`rancher namespaces` to see available namespaces: %s", namespaceID, err)
	}
	return namespace, nil
}
07070100000021000081A4000000000000000000000001673C868500000F7D000000000000000000000000000000000000001F00000000rancher-cli-2.10.0/cmd/node.go
package cmd

import (
	"fmt"

	"github.com/sirupsen/logrus"

	"github.com/rancher/cli/cliclient"
	managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3"
	"github.com/urfave/cli"
)

type NodeData struct {
	ID   string
	Node managementClient.Node
	Name string
	Pool string
}

func NodeCommand() cli.Command {
	return cli.Command{
		Name:    "nodes",
		Aliases: []string{"node"},
		Usage:   "Operations on nodes",
		Action:  defaultAction(nodeLs),
		Subcommands: []cli.Command{
			{
				Name:        "ls",
				Usage:       "List nodes",
				Description: "\nLists all nodes in the current cluster.",
				ArgsUsage:   "None",
				Action:      nodeLs,
				Flags: []cli.Flag{
					cli.StringFlag{
						Name: "format",
						Usage:
"'json', 'yaml' or Custom format: '{{.Node.ID}} {{.Node.Name}}'",
					},
					quietFlag,
				},
			},
			{
				Name:      "delete",
				Aliases:   []string{"rm"},
				Usage:     "Delete a node by ID",
				ArgsUsage: "[NODEID NODENAME]",
				Action:    nodeDelete,
			},
		},
	}
}

func nodeLs(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	collection, err := getNodesList(ctx, c, c.UserConfig.FocusedCluster())
	if err != nil {
		return err
	}
	nodePools, err := getNodePools(ctx, c)
	if err != nil {
		return err
	}
	writer := NewTableWriter([][]string{
		{"ID", "ID"},
		{"NAME", "Name"},
		{"STATE", "Node.State"},
		{"POOL", "Pool"},
		{"DESCRIPTION", "Node.Description"},
	}, ctx)
	defer writer.Close()
	for _, item := range collection.Data {
		writer.Write(&NodeData{
			ID:   item.ID,
			Node: item,
			Name: getNodeName(item),
			Pool: getNodePoolName(item, nodePools),
		})
	}
	return writer.Err()
}

func nodeDelete(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	for _, arg := range ctx.Args() {
		resource, err := Lookup(c, arg, "node")
		if err != nil {
			return err
		}
		node, err := getNodeByID(ctx, c, resource.ID)
		if err != nil {
			return err
		}
		if _, ok := node.Links["remove"]; !ok {
			logrus.Warnf("node %v is externally managed and must be deleted "+
				"through the provider", getNodeName(node))
			continue
		}
		err = c.ManagementClient.Node.Delete(&node)
		if err != nil {
			return err
		}
	}
	return nil
}

func getNodesList(
	ctx *cli.Context,
	c *cliclient.MasterClient,
	clusterID string,
) (*managementClient.NodeCollection, error) {
	filter := defaultListOpts(ctx)
	filter.Filters["clusterId"] = clusterID
	collection, err := c.ManagementClient.Node.List(filter)
	if err != nil {
		return nil, err
	}
	return collection, nil
}

func getNodeByID(
	ctx *cli.Context,
	c *cliclient.MasterClient,
	nodeID string,
) (managementClient.Node, error) {
	nodeCollection, err := getNodesList(ctx, c, c.UserConfig.FocusedCluster())
	if err != nil {
		return managementClient.Node{}, err
	}
	for _, node := range nodeCollection.Data {
		if node.ID == nodeID {
			return node, nil
		}
	}
	return managementClient.Node{}, fmt.Errorf("no node found with the ID [%s], run "+
		"`rancher nodes` to see available nodes", nodeID)
}

func getNodeName(node managementClient.Node) string {
	if node.Name != "" {
		return node.Name
	} else if node.NodeName != "" {
		return node.NodeName
	} else if node.RequestedHostname != "" {
		return node.RequestedHostname
	}
	return node.ID
}

func getNodePools(
	ctx *cli.Context,
	c *cliclient.MasterClient,
) (*managementClient.NodePoolCollection, error) {
	filter := defaultListOpts(ctx)
	filter.Filters["clusterId"] = c.UserConfig.FocusedCluster()
	collection, err := c.ManagementClient.NodePool.List(filter)
	if err != nil {
		return nil, err
	}
	return collection, nil
}

func getNodePoolName(node managementClient.Node, pools *managementClient.NodePoolCollection) string {
	for _, pool := range pools.Data {
		if node.NodePoolID == pool.ID {
			return pool.HostnamePrefix
		}
	}
	return ""
}
07070100000022000081A4000000000000000000000001673C868500001FF8000000000000000000000000000000000000002200000000rancher-cli-2.10.0/cmd/project.go
package cmd

import (
	"fmt"

	"github.com/rancher/cli/cliclient"
	managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3"
	"github.com/urfave/cli"
)

type ProjectData struct {
	ID      string
	Project managementClient.Project
}

func ProjectCommand() cli.Command {
	return cli.Command{
		Name:    "projects",
		Aliases: []string{"project"},
		Usage:   "Operations on projects",
		Action:  defaultAction(projectLs),
		Subcommands: []cli.Command{
			{
				Name:        "ls",
				Usage:       "List projects",
				Description: "\nLists all projects in the current cluster.",
				ArgsUsage:   "None",
				Action:      projectLs,
				Flags: []cli.Flag{
					cli.StringFlag{
						Name:  "format",
						Usage: "'json', 'yaml' or Custom format: '{{.Project.ID}} {{.Project.Name}}'",
					},
					quietFlag,
				},
			},
			{
				Name:        "create",
				Usage:       "Create a project",
				Description: "\nCreates a project in the current cluster.",
				ArgsUsage:   "[NEWPROJECTNAME...]",
				Action:
projectCreate,
				Flags: []cli.Flag{
					cli.StringFlag{
						Name:  "cluster",
						Usage: "Cluster ID to create the project in",
					},
					cli.StringFlag{
						Name:  "description",
						Usage: "Description to apply to the project",
					},
				},
			},
			{
				Name:      "delete",
				Aliases:   []string{"rm"},
				Usage:     "Delete a project by ID",
				ArgsUsage: "[PROJECTID PROJECTNAME]",
				Action:    projectDelete,
			},
			{
				Name:        "add-member-role",
				Usage:       "Add a member to the project",
				Action:      addProjectMemberRoles,
				Description: "Examples:\n #Create the roles of 'create-ns' and 'services-manage' for a user named 'user1'\n rancher project add-member-role user1 create-ns services-manage\n",
				ArgsUsage:   "[USERNAME, ROLE...]",
				Flags: []cli.Flag{
					cli.StringFlag{
						Name:  "project-id",
						Usage: "Optional project ID to apply this change to, defaults to the current context",
					},
				},
			},
			{
				Name:        "delete-member-role",
				Usage:       "Delete a member from the project",
				Action:      deleteProjectMemberRoles,
				Description: "Examples:\n #Delete the roles of 'create-ns' and 'services-manage' for a user named 'user1'\n rancher project delete-member-role user1 create-ns services-manage\n",
				ArgsUsage:   "[USERNAME, ROLE...]",
				Flags: []cli.Flag{
					cli.StringFlag{
						Name:  "project-id",
						Usage: "Optional project ID to apply this change to, defaults to the current context",
					},
				},
			},
			{
				Name:   "list-roles",
				Usage:  "List all available roles for a project",
				Action: listProjectRoles,
			},
			{
				Name:   "list-members",
				Usage:  "List current members of the project",
				Action: listProjectMembers,
				Flags: []cli.Flag{
					cli.StringFlag{
						Name:  "project-id",
						Usage: "Optional project ID to list members for, defaults to the current context",
					},
				},
			},
		},
	}
}

func projectLs(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	collection, err := getProjectList(ctx, c)
	if err != nil {
		return err
	}
	writer := NewTableWriter([][]string{
		{"ID", "ID"},
		{"NAME", "Project.Name"},
		{"STATE", "Project.State"},
		{"DESCRIPTION", "Project.Description"},
	}, ctx)
	defer writer.Close()
	for _, item := range collection.Data {
		writer.Write(&ProjectData{
			ID:      item.ID,
			Project: item,
		})
	}
	return writer.Err()
}

func projectCreate(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	clusterID := c.UserConfig.FocusedCluster()
	if ctx.String("cluster") != "" {
		resource, err := Lookup(c, ctx.String("cluster"), "cluster")
		if err != nil {
			return err
		}
		clusterID = resource.ID
	}
	newProj := &managementClient.Project{
		Name:        ctx.Args().First(),
		ClusterID:   clusterID,
		Description: ctx.String("description"),
	}
	_, err = c.ManagementClient.Project.Create(newProj)
	if err != nil {
		return err
	}
	return nil
}

func projectDelete(ctx *cli.Context) error {
	if ctx.NArg() == 0 {
		return cli.ShowSubcommandHelp(ctx)
	}
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	for _, arg := range ctx.Args() {
		resource, err := Lookup(c, arg, "project")
		if err != nil {
			return err
		}
		project, err := getProjectByID(c, resource.ID)
		if err != nil {
			return err
		}
		err = c.ManagementClient.Project.Delete(project)
		if err != nil {
			return err
		}
	}
	return nil
}

func addProjectMemberRoles(ctx *cli.Context) error {
	if len(ctx.Args()) < 2 {
		return cli.ShowSubcommandHelp(ctx)
	}
	memberName := ctx.Args().First()
	roles := ctx.Args()[1:]
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	member, err := searchForMember(ctx, c, memberName)
	if err != nil {
		return err
	}
	projectID := c.UserConfig.Project
	if ctx.String("project-id") != "" {
		projectID = ctx.String("project-id")
	}
	for _, role := range roles {
		rtb := managementClient.ProjectRoleTemplateBinding{
			ProjectID:      projectID,
			RoleTemplateID: role,
		}
		if member.PrincipalType == "user" {
			rtb.UserPrincipalID = member.ID
		} else {
			rtb.GroupPrincipalID = member.ID
		}
		_, err = c.ManagementClient.ProjectRoleTemplateBinding.Create(&rtb)
		if err != nil {
			return err
		}
	}
	return nil
}

func deleteProjectMemberRoles(ctx *cli.Context) error {
	if len(ctx.Args()) < 2 {
		return cli.ShowSubcommandHelp(ctx)
	}
	memberName := ctx.Args().First()
	roles := ctx.Args()[1:]
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	member, err := searchForMember(ctx, c, memberName)
	if err != nil {
		return err
	}
	projectID := c.UserConfig.Project
	if ctx.String("project-id") != "" {
		projectID = ctx.String("project-id")
	}
	for _, role := range roles {
		filter := defaultListOpts(ctx)
		filter.Filters["projectId"] = projectID
		filter.Filters["roleTemplateId"] = role
		if member.PrincipalType == "user" {
			filter.Filters["userPrincipalId"] = member.ID
		} else {
			filter.Filters["groupPrincipalId"] = member.ID
		}
		bindings, err := c.ManagementClient.ProjectRoleTemplateBinding.List(filter)
		if err != nil {
			return err
		}
		for _, binding := range bindings.Data {
			err = c.ManagementClient.ProjectRoleTemplateBinding.Delete(&binding)
			if err != nil {
				return err
			}
		}
	}
	return nil
}

func listProjectRoles(ctx *cli.Context) error {
	return listRoles(ctx, "project")
}

func listProjectMembers(ctx *cli.Context) error {
	c, err := GetClient(ctx)
	if err != nil {
		return err
	}
	projectID := c.UserConfig.Project
	if ctx.String("project-id") != "" {
		projectID = ctx.String("project-id")
	}
	filter := defaultListOpts(ctx)
	filter.Filters["projectId"] = projectID
	bindings, err := c.ManagementClient.ProjectRoleTemplateBinding.List(filter)
	if err != nil {
		return err
	}
	userFilter := defaultListOpts(ctx)
	users, err := c.ManagementClient.User.List(userFilter)
	if err != nil {
		return err
	}
	userMap := usersToNameMapping(users.Data)
	var b []RoleTemplateBinding
	for _, binding := range bindings.Data {
		parsedTime, err := createdTimetoHuman(binding.Created)
		if err != nil {
			return err
		}
		b = append(b, RoleTemplateBinding{
			ID:      binding.ID,
			User:    userMap[binding.UserID],
			Role:    binding.RoleTemplateID,
			Created: parsedTime,
		})
	}
	return listRoleTemplateBindings(ctx, b)
}

func getProjectList(
	ctx *cli.Context,
	c *cliclient.MasterClient,
) (*managementClient.ProjectCollection, error) {
	filter := defaultListOpts(ctx)
	filter.Filters["clusterId"] = c.UserConfig.FocusedCluster()
	collection, err := c.ManagementClient.Project.List(filter)
	if err != nil {
		return nil, err
	}
	return collection, nil
}

func getProjectByID(
	c *cliclient.MasterClient,
	projectID string,
) (*managementClient.Project, error) {
	project, err := c.ManagementClient.Project.ByID(projectID)
	if err != nil {
		return nil, fmt.Errorf("no project found with the ID [%s], run "+
			"`rancher projects` to see available projects: %s", projectID, err)
	}
	return project, nil
}
07070100000023000081A4000000000000000000000001673C868500000ABE000000000000000000000000000000000000001D00000000rancher-cli-2.10.0/cmd/ps.go
package cmd

import (
	"strconv"

	"github.com/rancher/cli/cliclient"
	"github.com/urfave/cli"
	"golang.org/x/text/cases"
	"golang.org/x/text/language"
)

type PSHolder struct {
	NameSpace string
	Name      string
	Type      string
	State     string
	Image     string
	Scale     string
}

func PsCommand() cli.Command {
	return cli.Command{
		Name:  "ps",
		Usage: "Show workloads in a project",
		Description: `Show information on the workloads in a project. Defaults to the current context.
Examples: # Show workloads in the current context $ rancher ps # Show workloads in a specific project and output the results in yaml $ rancher ps --project projectFoo --format yaml `, Action: psLs, Flags: []cli.Flag{ cli.StringFlag{ Name: "project", Usage: "Optional project to show workloads for", }, cli.StringFlag{ Name: "format", Usage: "'json', 'yaml' or Custom format: '{{.Name}} {{.Image}}'", }, }, } } func psLs(ctx *cli.Context) error { c, err := GetClient(ctx) if err != nil { return err } if ctx.String("project") != "" { //Verify the project given is valid resource, err := Lookup(c, ctx.String("project"), "project") if err != nil { return err } sc, err := lookupConfig(ctx) if err != nil { return err } sc.Project = resource.ID projClient, err := cliclient.NewProjectClient(sc) if err != nil { return err } c.ProjectClient = projClient.ProjectClient } workLoads, err := c.ProjectClient.Workload.List(defaultListOpts(ctx)) if err != nil { return err } wlWriter := NewTableWriter([][]string{ {"NAMESPACE", "NameSpace"}, {"NAME", "Name"}, {"TYPE", "Type"}, {"STATE", "State"}, {"IMAGE", "Image"}, {"SCALE", "Scale"}, }, ctx) defer wlWriter.Close() titleCaser := cases.Title(language.Und) for _, item := range workLoads.Data { var scale string if item.Scale == nil { scale = "-" } else { scale = strconv.Itoa(int(*item.Scale)) } item.Type = titleCaser.String(item.Type) wlWriter.Write(&PSHolder{ NameSpace: item.NamespaceId, Name: item.Name, Type: item.Type, State: item.State, Image: item.Containers[0].Image, Scale: scale, }) } opts := defaultListOpts(ctx) opts.Filters["workloadId"] = "" orphanPods, err := c.ProjectClient.Pod.List(opts) if err != nil { return err } if len(orphanPods.Data) > 0 { for _, item := range orphanPods.Data { item.Type = titleCaser.String(item.Type) wlWriter.Write(&PSHolder{ NameSpace: item.NamespaceId, Name: item.Name, Type: item.Type, State: item.State, Image: item.Containers[0].Image, Scale: "Standalone", // a single pod doesn't have scale }) } } 
return nil } 07070100000024000081A4000000000000000000000001673C86850000176D000000000000000000000000000000000000002100000000rancher-cli-2.10.0/cmd/server.gopackage cmd import ( "bufio" "fmt" "io" "os" "sort" "strconv" "strings" "github.com/pkg/errors" "github.com/rancher/cli/config" "github.com/sirupsen/logrus" "github.com/urfave/cli" "golang.org/x/exp/maps" ) type serverData struct { Index int Current string Name string URL string } // ServerCommand defines the 'rancher server' sub-commands func ServerCommand() cli.Command { cfg := &config.Config{} return cli.Command{ Name: "server", Usage: "Operations for the server", Description: `Switch or view the server currently in focus. `, Before: loadAndValidateConfig(cfg), Subcommands: []cli.Command{ { Name: "current", Usage: "Display the current server", Action: func(ctx *cli.Context) error { return serverCurrent(ctx.App.Writer, cfg) }, }, { Name: "delete", Usage: "Delete a server from the local config", ArgsUsage: "[SERVER_NAME]", Description: ` The server argument is optional; if it is not passed, a list of available servers will be displayed and one can be selected. `, Action: func(ctx *cli.Context) error { serverName, err := getSelectedServer(ctx, cfg) if err != nil { return err } return serverDelete(cfg, serverName) }, }, { Name: "ls", Usage: "List all servers", Action: func(ctx *cli.Context) error { format := ctx.String("format") return serverLs(ctx.App.Writer, cfg, format) }, }, { Name: "switch", Usage: "Switch to a new server", ArgsUsage: "[SERVER_NAME]", Description: ` The server argument is optional; if it is not passed, a list of available servers will be displayed and one can be selected.
`, Action: func(ctx *cli.Context) error { serverName, err := getSelectedServer(ctx, cfg) if err != nil { return err } return serverSwitch(cfg, serverName) }, }, }, } } // serverCurrent command to display the name of the current server in the local config func serverCurrent(out io.Writer, cfg *config.Config) error { serverName := cfg.CurrentServer currentServer, found := cfg.Servers[serverName] if !found { return errors.New("Current server not set") } fmt.Fprintf(out, "Name: %s URL: %s\n", serverName, currentServer.URL) return nil } // serverDelete command to delete a server from the local config func serverDelete(cfg *config.Config, serverName string) error { _, ok := cfg.Servers[serverName] if !ok { return errors.New("Server not found") } delete(cfg.Servers, serverName) if cfg.CurrentServer == serverName { cfg.CurrentServer = "" } err := cfg.Write() if err != nil { return err } logrus.Infof("Server %s deleted", serverName) return nil } // serverLs command to list rancher servers from the local config func serverLs(out io.Writer, cfg *config.Config, format string) error { writerConfig := &TableWriterConfig{ Writer: out, Format: format, } writer := NewTableWriterWithConfig([][]string{ {"CURRENT", "Current"}, {"NAME", "Name"}, {"URL", "URL"}, }, writerConfig) defer writer.Close() servers := getServers(cfg) for _, server := range servers { writer.Write(server) } return writer.Err() } // serverSwitch will alter and write the config to switch rancher server. func serverSwitch(cf *config.Config, serverName string) error { _, ok := cf.Servers[serverName] if !ok { return errors.New("Server not found") } if len(cf.Servers[serverName].Project) == 0 { logrus.Warn("No context set; some commands will not work. Run 'rancher context switch'") } cf.CurrentServer = serverName err := cf.Write() if err != nil { return err } return nil } // getSelectedServer will get the selected server if provided as argument, // or it will prompt the user to select one. 
func getSelectedServer(ctx *cli.Context, cfg *config.Config) (string, error) { serverName := ctx.Args().First() if serverName != "" { return serverName, nil } return serverFromInput(ctx, cfg) } // serverFromInput displays the list of servers from the local config and // prompt the user to select one. func serverFromInput(ctx *cli.Context, cf *config.Config) (string, error) { servers := getServers(cf) if err := displayListServers(ctx, servers); err != nil { return "", err } fmt.Print("Select a Server:") reader := bufio.NewReader(os.Stdin) errMessage := fmt.Sprintf("Invalid input, enter a number between 1 and %v: ", len(servers)) var selection int for { input, err := reader.ReadString('\n') if err != nil { return "", err } input = strings.TrimSpace(input) if input != "" { i, err := strconv.Atoi(input) if err != nil { fmt.Print(errMessage) continue } if i <= len(servers) && i != 0 { selection = i - 1 break } fmt.Print(errMessage) continue } } return servers[selection].Name, nil } // displayListServers displays the list of rancher servers func displayListServers(ctx *cli.Context, servers []*serverData) error { writer := NewTableWriter([][]string{ {"INDEX", "Index"}, {"NAME", "Name"}, {"URL", "URL"}, }, ctx) defer writer.Close() for _, server := range servers { writer.Write(server) } return writer.Err() } // getServers returns an ordered slice (by name) of serverData func getServers(cfg *config.Config) []*serverData { serverNames := maps.Keys(cfg.Servers) sort.Strings(serverNames) servers := []*serverData{} for i, server := range serverNames { var current string if server == cfg.CurrentServer { current = "*" } servers = append(servers, &serverData{ Index: i + 1, Name: server, Current: current, URL: cfg.Servers[server].URL, }) } return servers } func loadAndValidateConfig(cfg *config.Config) cli.BeforeFunc { return func(ctx *cli.Context) error { conf, err := loadConfig(ctx) if err != nil { return err } *cfg = conf if len(cfg.Servers) == 0 { return errors.New("no servers 
are currently configured") } return nil } } 07070100000025000081A4000000000000000000000001673C8685000019A2000000000000000000000000000000000000002600000000rancher-cli-2.10.0/cmd/server_test.gopackage cmd import ( "bytes" "os" "testing" "github.com/rancher/cli/config" "github.com/stretchr/testify/assert" ) func TestServerCurrentCommand(t *testing.T) { tt := []struct { name string config *config.Config expectedOutput string expectedErr string }{ { name: "existing current server set", config: newTestConfig(), expectedOutput: "Name: server1 URL: https://myserver-1.com\n", }, { name: "empty current server", config: func() *config.Config { cfg := newTestConfig() cfg.CurrentServer = "" return cfg }(), expectedErr: "Current server not set", }, { name: "non existing current server set", config: &config.Config{ CurrentServer: "notfound-server", Servers: map[string]*config.ServerConfig{ "my-server": {URL: "https://myserver.com"}, }, }, expectedErr: "Current server not set", }, } for _, tc := range tt { tc := tc t.Run(tc.name, func(t *testing.T) { t.Parallel() out := &bytes.Buffer{} err := serverCurrent(out, tc.config) if tc.expectedErr != "" { assert.EqualError(t, err, tc.expectedErr) } else { assert.NoError(t, err) } assert.Equal(t, tc.expectedOutput, out.String()) }) } } func TestServerDelete(t *testing.T) { tt := []struct { name string actualCurrentServer string serverToDelete string expectedCurrentServer string expectedErr string }{ { name: "delete a different server will delete it", actualCurrentServer: "server1", serverToDelete: "server3", expectedCurrentServer: "server1", }, { name: "delete the same server will blank the current", actualCurrentServer: "server1", serverToDelete: "server1", expectedCurrentServer: "", }, { name: "delete a non existing server", actualCurrentServer: "server1", serverToDelete: "server-nope", expectedCurrentServer: "server1", expectedErr: "Server not found", }, } for _, tc := range tt { tc := tc t.Run(tc.name, func(t *testing.T) { t.Parallel() 
tmpConfig, err := os.CreateTemp("", "*-rancher-config.json") assert.NoError(t, err) defer os.Remove(tmpConfig.Name()) // setup test config cfg := newTestConfig() cfg.Path = tmpConfig.Name() cfg.CurrentServer = tc.actualCurrentServer // do test and check resulting config err = serverDelete(cfg, tc.serverToDelete) if err != nil { assert.EqualError(t, err, tc.expectedErr) } else { assert.NoError(t, err) } assert.Equal(t, tc.expectedCurrentServer, cfg.CurrentServer) assert.Empty(t, cfg.Servers[tc.serverToDelete]) }) } } func TestServerSwitch(t *testing.T) { tt := []struct { name string actualCurrentServer string serverName string expectedCurrentServer string expectedErr string }{ { name: "switch to different server updates the current server", actualCurrentServer: "server1", serverName: "server3", expectedCurrentServer: "server3", }, { name: "switch to same server is no-op", actualCurrentServer: "server1", serverName: "server1", expectedCurrentServer: "server1", }, { name: "switch to non existing server", actualCurrentServer: "server1", serverName: "server-nope", expectedCurrentServer: "server1", expectedErr: "Server not found", }, { name: "switch to empty server fails", actualCurrentServer: "server1", serverName: "", expectedCurrentServer: "server1", expectedErr: "Server not found", }, } for _, tc := range tt { tc := tc t.Run(tc.name, func(t *testing.T) { t.Parallel() tmpConfig, err := os.CreateTemp("", "*-rancher-config.json") assert.NoError(t, err) defer os.Remove(tmpConfig.Name()) // setup test config cfg := newTestConfig() cfg.Path = tmpConfig.Name() cfg.CurrentServer = tc.actualCurrentServer // do test and check resulting config err = serverSwitch(cfg, tc.serverName) if err != nil { assert.EqualError(t, err, tc.expectedErr) } else { assert.NoError(t, err) } assert.Equal(t, tc.expectedCurrentServer, cfg.CurrentServer) }) } } func TestServerLs(t *testing.T) { tt := []struct { name string config *config.Config format string expectedOutput string expectedErr bool }{ 
{ name: "list servers", expectedOutput: `CURRENT NAME URL * server1 https://myserver-1.com server2 https://myserver-2.com server3 https://myserver-3.com `, }, { name: "list empty config", config: &config.Config{}, format: "", expectedOutput: "CURRENT NAME URL\n", }, { name: "list servers with json format", format: "json", expectedOutput: `{"Index":1,"Current":"*","Name":"server1","URL":"https://myserver-1.com"} {"Index":2,"Current":"","Name":"server2","URL":"https://myserver-2.com"} {"Index":3,"Current":"","Name":"server3","URL":"https://myserver-3.com"} `, }, { name: "list servers with yaml format", format: "yaml", expectedOutput: `Current: '*' Index: 1 Name: server1 URL: https://myserver-1.com Current: "" Index: 2 Name: server2 URL: https://myserver-2.com Current: "" Index: 3 Name: server3 URL: https://myserver-3.com `, }, { name: "list servers with custom format", format: "{{.URL}}", expectedOutput: `https://myserver-1.com https://myserver-2.com https://myserver-3.com `, }, { name: "list servers with custom format", format: "{{.err}}", expectedErr: true, }, } for _, tc := range tt { tc := tc t.Run(tc.name, func(t *testing.T) { t.Parallel() out := &bytes.Buffer{} if tc.config == nil { tc.config = newTestConfig() } // do test and check resulting config err := serverLs(out, tc.config, tc.format) if tc.expectedErr { assert.Error(t, err) } else { assert.NoError(t, err) } assert.Equal(t, tc.expectedOutput, out.String()) }) } } func newTestConfig() *config.Config { return &config.Config{ CurrentServer: "server1", Servers: map[string]*config.ServerConfig{ "server1": {URL: "https://myserver-1.com"}, "server2": {URL: "https://myserver-2.com"}, "server3": {URL: "https://myserver-3.com"}, }, } } 07070100000026000081A4000000000000000000000001673C868500000D69000000000000000000000000000000000000002300000000rancher-cli-2.10.0/cmd/settings.gopackage cmd import ( managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3" "github.com/sirupsen/logrus" 
"github.com/urfave/cli" ) type settingHolder struct { ID string Setting managementClient.Setting } func SettingsCommand() cli.Command { return cli.Command{ Name: "settings", Aliases: []string{"setting"}, Usage: "Show settings for the current server", Description: "List, get, or set settings for the current Rancher server", Action: defaultAction(settingsLs), Flags: []cli.Flag{ formatFlag, }, Subcommands: []cli.Command{ { Name: "ls", Usage: "List settings", Description: "Lists all settings in the current cluster.", ArgsUsage: "[SETTINGNAME]", Action: settingsLs, Flags: []cli.Flag{ formatFlag, quietFlag, }, }, { Name: "get", Usage: "Print a setting", Action: settingGet, Flags: []cli.Flag{ formatFlag, }, }, { Name: "set", Usage: "Set the value for a setting", Action: settingSet, ArgsUsage: "[SETTINGNAME VALUE]", Flags: []cli.Flag{ formatFlag, cli.BoolFlag{ Name: "default", Usage: "Reset the setting back to its default value. If the default value is (blank), it will be set to that.", }, }, }, }, } } func settingsLs(ctx *cli.Context) error { c, err := GetClient(ctx) if err != nil { return err } settings, err := c.ManagementClient.Setting.List(defaultListOpts(ctx)) if err != nil { return err } writer := NewTableWriter([][]string{ {"ID", "ID"}, {"NAME", "Setting.Name"}, {"VALUE", "Setting.Value"}, }, ctx) defer writer.Close() for _, setting := range settings.Data { writer.Write(&settingHolder{ ID: setting.ID, Setting: setting, }) } return writer.Err() } func settingGet(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowCommandHelp(ctx, "settings") } c, err := GetClient(ctx) if err != nil { return err } resource, err := Lookup(c, ctx.Args().First(), "setting") if err != nil { return err } setting, err := c.ManagementClient.Setting.ByID(resource.ID) if err != nil { return err } writer := NewTableWriter([][]string{ {"ID", "ID"}, {"NAME", "Setting.Name"}, {"VALUE", "Setting.Value"}, {"DEFAULT", "Setting.Default"}, {"CUSTOMIZED", "Setting.Customized"}, }, ctx) defer
writer.Close() writer.Write(&settingHolder{ ID: setting.ID, Setting: *setting, }) return writer.Err() } func settingSet(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowCommandHelp(ctx, "settings") } c, err := GetClient(ctx) if err != nil { return err } resource, err := Lookup(c, ctx.Args().First(), "setting") if err != nil { return err } setting, err := c.ManagementClient.Setting.ByID(resource.ID) if err != nil { return err } update := make(map[string]string) if ctx.Bool("default") { update["value"] = setting.Default } else { update["value"] = ctx.Args().Get(1) } updatedSetting, err := c.ManagementClient.Setting.Update(setting, update) if err != nil { return err } var updatedValue string if updatedSetting.Value == "" { updatedValue = "(blank)" } else { updatedValue = updatedSetting.Value } logrus.Infof("Successfully updated setting %s with a new value of: %s", updatedSetting.Name, updatedValue) return nil } 07070100000027000081A4000000000000000000000001673C86850000150D000000000000000000000000000000000000001E00000000rancher-cli-2.10.0/cmd/ssh.gopackage cmd import ( "archive/zip" "bytes" "crypto/tls" "crypto/x509" "encoding/json" "fmt" "io" "net/http" "os" "os/exec" "path" "strings" "github.com/pkg/errors" "github.com/rancher/cli/cliclient" managementClient "github.com/rancher/rancher/pkg/client/generated/management/v3" "github.com/urfave/cli" ) const sshDescription = ` For any nodes created through Rancher using docker-machine, you can SSH into the node. This is not supported for any custom nodes. 
Examples: # SSH into a node by ID/name $ rancher ssh nodeFoo # SSH into a node by ID/name using the external IP address $ rancher ssh -e nodeFoo # SSH into a node by name but specify the login name to use $ rancher ssh -l login1 nodeFoo # SSH into a node by specifying login name and node using the @ syntax while adding a command to run $ rancher ssh login1@nodeFoo -- netstat -p tcp ` func SSHCommand() cli.Command { return cli.Command{ Name: "ssh", Usage: "SSH into a node", Description: sshDescription, Action: nodeSSH, ArgsUsage: "[NODE_ID/NODE_NAME]", Flags: []cli.Flag{ cli.BoolFlag{ Name: "external,e", Usage: "Use the external ip address of the node", }, cli.StringFlag{ Name: "login,l", Usage: "The login name", }, }, } } func nodeSSH(ctx *cli.Context) error { args := ctx.Args() if len(args) > 0 && (args[0] == "-h" || args[0] == "--help") { return cli.ShowCommandHelp(ctx, "ssh") } if ctx.NArg() == 0 { return cli.ShowCommandHelp(ctx, "ssh") } user := ctx.String("login") nodeName := ctx.Args().First() if strings.Contains(nodeName, "@") { user = strings.Split(nodeName, "@")[0] nodeName = strings.Split(nodeName, "@")[1] } args = args[1:] c, err := GetClient(ctx) if err != nil { return err } sshNode, key, err := getNodeAndKey(ctx, c, nodeName) if err != nil { return err } if user == "" { user = sshNode.SshUser } ipAddress := sshNode.IPAddress if ctx.Bool("external") { ipAddress = sshNode.ExternalIPAddress } return processExitCode(callSSH(key, ipAddress, user, args)) } func getNodeAndKey(ctx *cli.Context, c *cliclient.MasterClient, nodeName string) (managementClient.Node, []byte, error) { sshNode := managementClient.Node{} resource, err := Lookup(c, nodeName, "node") if err != nil { return sshNode, nil, err } sshNode, err = getNodeByID(ctx, c, resource.ID) if err != nil { return sshNode, nil, err } link := sshNode.Links["nodeConfig"] if link == "" { // Get the machine and use that instead. 
machine, err := getMachineByNodeName(ctx, c, sshNode.NodeName) if err != nil { return sshNode, nil, fmt.Errorf("failed to find SSH key for node [%s]", nodeName) } link = machine.Links["sshkeys"] } key, sshUser, err := getSSHKey(c, link, getNodeName(sshNode)) if err != nil { return sshNode, nil, err } if sshUser != "" { sshNode.SshUser = sshUser } return sshNode, key, nil } func callSSH(content []byte, ip string, user string, args []string) error { dest := fmt.Sprintf("%s@%s", user, ip) tmpfile, err := os.CreateTemp("", "ssh") if err != nil { return err } defer os.Remove(tmpfile.Name()) if err := os.Chmod(tmpfile.Name(), 0600); err != nil { return err } _, err = tmpfile.Write(content) if err != nil { return err } if err := tmpfile.Close(); err != nil { return err } cmd := exec.Command("ssh", append([]string{"-i", tmpfile.Name(), dest}, args...)...) cmd.Stdout = os.Stdout cmd.Stdin = os.Stdin cmd.Stderr = os.Stderr return cmd.Run() } func getSSHKey(c *cliclient.MasterClient, link, nodeName string) ([]byte, string, error) { if link == "" { return nil, "", fmt.Errorf("failed to find SSH key for %s", nodeName) } req, err := http.NewRequest("GET", link, nil) if err != nil { return nil, "", err } req.SetBasicAuth(c.UserConfig.AccessKey, c.UserConfig.SecretKey) req.Header.Add("Accept-Encoding", "zip") client := &http.Client{} if c.UserConfig.CACerts != "" { roots := x509.NewCertPool() ok := roots.AppendCertsFromPEM([]byte(c.UserConfig.CACerts)) if !ok { // previously returned a nil err on parse failure return nil, "", errors.New("failed to parse CA certificates") } tr := &http.Transport{ TLSClientConfig: &tls.Config{ RootCAs: roots, }, } client.Transport = tr } resp, err := client.Do(req) if err != nil { return nil, "", err } defer resp.Body.Close() zipFiles, err := io.ReadAll(resp.Body) if err != nil { return nil, "", err } if resp.StatusCode != 200 { return nil, "", fmt.Errorf("%s", zipFiles) } zipReader, err := zip.NewReader(bytes.NewReader(zipFiles), resp.ContentLength) if err != nil { return nil, "", err } var sshKey []byte var sshUser string for _,
file := range zipReader.File { if path.Base(file.Name) == "id_rsa" { sshKey, err = readFile(file) if err != nil { return nil, "", err } } else if path.Base(file.Name) == "config.json" { config, err := readFile(file) if err != nil { return nil, "", err } var data map[string]interface{} err = json.Unmarshal(config, &data) if err != nil { return nil, "", err } sshUser, _ = data["SSHUser"].(string) } } if len(sshKey) == 0 { return sshKey, "", errors.New("can't find private key file") } return sshKey, sshUser, nil } func readFile(file *zip.File) ([]byte, error) { r, err := file.Open() if err != nil { return nil, err } defer r.Close() return io.ReadAll(r) } 07070100000028000081A4000000000000000000000001673C868500000393000000000000000000000000000000000000001D00000000rancher-cli-2.10.0/cmd/up.gopackage cmd import ( "os" "github.com/rancher/cli/cliclient" client "github.com/rancher/rancher/pkg/client/generated/management/v3" "github.com/urfave/cli" ) func UpCommand() cli.Command { return cli.Command{ Name: "up", Usage: "apply compose config", Action: defaultAction(apply), Flags: []cli.Flag{ cli.StringFlag{ Name: "file,f", Usage: "The location of compose config file", }, }, } } func apply(ctx *cli.Context) error { cf, err := lookupConfig(ctx) if err != nil { return err } c, err := cliclient.NewManagementClient(cf) if err != nil { return err } filePath := ctx.String("file") compose, err := os.ReadFile(filePath) if err != nil { return err } globalComposeConfig := &client.ComposeConfig{ RancherCompose: string(compose), } if _, err := c.ManagementClient.ComposeConfig.Create(globalComposeConfig); err != nil { return err } return nil } 07070100000029000081A4000000000000000000000001673C86850000029A000000000000000000000000000000000000002200000000rancher-cli-2.10.0/cmd/util_ls.gopackage cmd import ( "github.com/rancher/norman/types" "github.com/urfave/cli" ) func baseListOpts() *types.ListOpts { return &types.ListOpts{ Filters: map[string]interface{}{ "limit": -1, "all": true, }, } } 
func defaultListOpts(ctx *cli.Context) *types.ListOpts { listOpts := baseListOpts() if ctx != nil && !ctx.Bool("all") { listOpts.Filters["removed_null"] = "1" listOpts.Filters["state_ne"] = []string{ "inactive", "stopped", "removing", } delete(listOpts.Filters, "all") } if ctx != nil && ctx.Bool("system") { delete(listOpts.Filters, "system") } else { listOpts.Filters["system"] = "false" } return listOpts } 0707010000002A000081A4000000000000000000000001673C868500000859000000000000000000000000000000000000001F00000000rancher-cli-2.10.0/cmd/wait.gopackage cmd import ( "fmt" "strings" "time" ntypes "github.com/rancher/norman/types" "github.com/sirupsen/logrus" "github.com/urfave/cli" ) var ( waitTypes = []string{"cluster", "app", "project", "multiClusterApp"} ) func WaitCommand() cli.Command { return cli.Command{ Name: "wait", Usage: "Wait for resources " + strings.Join(waitTypes, ", "), ArgsUsage: "[ID/NAME]", Action: defaultAction(wait), Flags: []cli.Flag{ cli.IntFlag{ Name: "timeout", Usage: "Time in seconds to wait for a resource", Value: 120, }, }, } } func wait(ctx *cli.Context) error { if ctx.NArg() == 0 { return cli.ShowCommandHelp(ctx, "wait") } c, err := GetClient(ctx) if err != nil { return err } resource, err := Lookup(c, ctx.Args().First(), waitTypes...) 
if err != nil { return err } mapResource := map[string]interface{}{} // Initial check shortcut err = c.ByID(resource, &mapResource) if err != nil { return err } ok, err := checkDone(resource, mapResource) if err != nil { return err } if ok { return nil } timeout := time.After(time.Duration(ctx.Int("timeout")) * time.Second) ticker := time.NewTicker(time.Second) for { select { case <-timeout: return fmt.Errorf("Timeout reached %v:%v transitioningMessage: %v", resource.Type, resource.ID, mapResource["transitioningMessage"]) case <-ticker.C: err = c.ByID(resource, &mapResource) if err != nil { return err } ok, err := checkDone(resource, mapResource) if err != nil { return err } if ok { return nil } } } } func checkDone(resource *ntypes.Resource, data map[string]interface{}) (bool, error) { transitioning := fmt.Sprint(data["transitioning"]) logrus.Debugf("%s:%s transitioning=%s state=%v", resource.Type, resource.ID, transitioning, data["state"]) switch transitioning { case "yes": return false, nil case "error": if data["state"] == "provisioning" { break } return false, fmt.Errorf("%v:%v failed, transitioningMessage: %v", resource.Type, resource.ID, data["transitioningMessage"]) } return data["state"] == "active", nil } 0707010000002B000081A4000000000000000000000001673C868500000916000000000000000000000000000000000000002100000000rancher-cli-2.10.0/cmd/writer.gopackage cmd import ( "encoding/json" "io" "os" "text/tabwriter" "github.com/ghodss/yaml" "github.com/urfave/cli" ) type TableWriter struct { HeaderFormat string ValueFormat string err error headerPrinted bool Writer *tabwriter.Writer } type TableWriterConfig struct { Quiet bool Format string Writer io.Writer } func NewTableWriter(values [][]string, ctx *cli.Context) *TableWriter { cfg := &TableWriterConfig{ Writer: os.Stdout, Quiet: ctx.Bool("quiet"), Format: ctx.String("format"), } return NewTableWriterWithConfig(values, cfg) } func NewTableWriterWithConfig(values [][]string, config *TableWriterConfig) 
*TableWriter { writer := config.Writer if writer == nil { writer = os.Stdout } t := &TableWriter{ Writer: tabwriter.NewWriter(writer, 10, 1, 3, ' ', 0), } t.HeaderFormat, t.ValueFormat = SimpleFormat(values) // remove headers if quiet or with a different format if config.Quiet || config.Format != "" { t.HeaderFormat = "" } // when quiet show only the ID if config.Quiet { t.ValueFormat = "{{.ID}}\n" } // check for custom formatting if config.Format != "" { customFormat := config.Format // add a newline for other custom formats if customFormat != "json" && customFormat != "yaml" { customFormat += "\n" } t.ValueFormat = customFormat } return t } func (t *TableWriter) Err() error { return t.err } func (t *TableWriter) writeHeader() { if t.HeaderFormat != "" && !t.headerPrinted { t.headerPrinted = true t.err = printTemplate(t.Writer, t.HeaderFormat, struct{}{}) if t.err != nil { return } } } func (t *TableWriter) Write(obj interface{}) { if t.err != nil { return } t.writeHeader() if t.err != nil { return } if t.ValueFormat == "json" { content, err := json.Marshal(obj) t.err = err if t.err != nil { return } _, t.err = t.Writer.Write(append(content, byte('\n'))) } else if t.ValueFormat == "yaml" { content, err := yaml.Marshal(obj) t.err = err if t.err != nil { return } _, t.err = t.Writer.Write(append(content, byte('\n'))) } else { t.err = printTemplate(t.Writer, t.ValueFormat, obj) } } func (t *TableWriter) Close() error { if t.err != nil { return t.err } t.writeHeader() if t.err != nil { return t.err } return t.Writer.Flush() } 0707010000002C000041ED000000000000000000000002673C868500000000000000000000000000000000000000000000001A00000000rancher-cli-2.10.0/config0707010000002D000081A4000000000000000000000001673C868500000EC5000000000000000000000000000000000000002400000000rancher-cli-2.10.0/config/config.gopackage config import ( "encoding/json" "errors" "fmt" "net/url" "os" "path/filepath" "strings" "github.com/sirupsen/logrus" "k8s.io/client-go/tools/clientcmd/api" ) var 
ErrNoConfigurationFound = errors.New("no configuration found, run `login`") // Config holds the main config for the user type Config struct { Servers map[string]*ServerConfig //Path to the config file Path string `json:"path,omitempty"` // CurrentServer the user has in focus CurrentServer string } // ServerConfig holds the config for each server the user has setup type ServerConfig struct { AccessKey string `json:"accessKey"` SecretKey string `json:"secretKey"` TokenKey string `json:"tokenKey"` URL string `json:"url"` Project string `json:"project"` CACerts string `json:"cacert"` KubeCredentials map[string]*ExecCredential `json:"kubeCredentials"` KubeConfigs map[string]*api.Config `json:"kubeConfigs"` } // LoadFromPath attempts to load a config from the given file path. If the file // doesn't exist, an empty config is returned. func LoadFromPath(path string) (Config, error) { cf := Config{ Path: path, Servers: make(map[string]*ServerConfig), } content, err := os.ReadFile(path) if err != nil { // it's okay if the file is empty, we still return a valid config if os.IsNotExist(err) { return cf, nil } return cf, err } if err := json.Unmarshal(content, &cf); err != nil { return cf, fmt.Errorf("unmarshaling %s: %w", path, err) } cf.Path = path return cf, nil } // GetFilePermissionWarnings returns the following warnings based on the file permission: // - one warning if the file is group-readable // - one warning if the file is world-readable // We want this because configuration may have sensitive information (eg: creds). // A nil error is returned if the file doesn't exist. func GetFilePermissionWarnings(path string) ([]string, error) { info, err := os.Stat(path) if err != nil { if os.IsNotExist(err) { return []string{}, nil } return []string{}, fmt.Errorf("get file info: %w", err) } var warnings []string if info.Mode()&0040 > 0 { warnings = append(warnings, fmt.Sprintf("Rancher configuration file %s is group-readable. 
This is insecure.", path)) } if info.Mode()&0004 > 0 { warnings = append(warnings, fmt.Sprintf("Rancher configuration file %s is world-readable. This is insecure.", path)) } return warnings, nil } func (c Config) Write() error { err := os.MkdirAll(filepath.Dir(c.Path), 0700) if err != nil { return err } logrus.Infof("Saving config to %s", c.Path) p := c.Path c.Path = "" output, err := os.OpenFile(p, os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0600) if err != nil { return err } defer output.Close() return json.NewEncoder(output).Encode(c) } func (c Config) FocusedServer() (*ServerConfig, error) { currentServer, found := c.Servers[c.CurrentServer] if !found || currentServer == nil { return nil, ErrNoConfigurationFound } return currentServer, nil } func (c ServerConfig) FocusedCluster() string { return strings.Split(c.Project, ":")[0] } func (c ServerConfig) KubeToken(key string) *ExecCredential { return c.KubeCredentials[key] } func (c ServerConfig) EnvironmentURL() (string, error) { url, err := baseURL(c.URL) if err != nil { return "", err } return url, nil } func baseURL(fullURL string) (string, error) { idx := strings.LastIndex(fullURL, "/v3") if idx == -1 { u, err := url.Parse(fullURL) if err != nil { return "", err } newURL := url.URL{ Scheme: u.Scheme, Host: u.Host, } return newURL.String(), nil } return fullURL[:idx], nil } 0707010000002E000081A4000000000000000000000001673C86850000127E000000000000000000000000000000000000002900000000rancher-cli-2.10.0/config/config_test.gopackage config import ( "os" "path/filepath" "testing" "github.com/stretchr/testify/assert" ) const ( validFile = ` { "Servers": { "rancherDefault": { "accessKey": "the-access-key", "secretKey": "the-secret-key", "tokenKey": "the-token-key", "url": "https://example.com", "project": "cluster-id:project-id", "cacert": "", "kubeCredentials": null, "kubeConfigs": null } }, "CurrentServer": "rancherDefault" }` invalidFile = `invalid config file` ) func Test_GetFilePermissionWarnings(t *testing.T) { 
t.Parallel() tests := []struct { name string mode os.FileMode expectedWarnings int }{ { name: "neither group-readable nor world-readable", mode: os.FileMode(0600), expectedWarnings: 0, }, { name: "group-readable and world-readable", mode: os.FileMode(0644), expectedWarnings: 2, }, { name: "group-readable", mode: os.FileMode(0640), expectedWarnings: 1, }, { name: "world-readable", mode: os.FileMode(0604), expectedWarnings: 1, }, } for _, tt := range tests { tt := tt t.Run(tt.name, func(t *testing.T) { t.Parallel() assert := assert.New(t) dir, err := os.MkdirTemp("", "rancher-cli-test-*") assert.NoError(err) defer os.RemoveAll(dir) path := filepath.Join(dir, "cli2.json") err = os.WriteFile(path, []byte(validFile), tt.mode) assert.NoError(err) warnings, err := GetFilePermissionWarnings(path) assert.NoError(err) assert.Len(warnings, tt.expectedWarnings) }) } } func Test_Permission(t *testing.T) { t.Parallel() // New config files should have 0600 permissions t.Run("new config file", func(t *testing.T) { t.Parallel() assert := assert.New(t) dir, err := os.MkdirTemp("", "rancher-cli-test-*") assert.NoError(err) defer os.RemoveAll(dir) path := filepath.Join(dir, "cli2.json") conf, err := LoadFromPath(path) assert.NoError(err) err = conf.Write() assert.NoError(err) info, err := os.Stat(path) assert.NoError(err) assert.Equal(os.FileMode(0600), info.Mode()) // make sure new file doesn't create permission warnings warnings, err := GetFilePermissionWarnings(path) assert.NoError(err) assert.Len(warnings, 0) }) // Already existing config files should keep their current permissions t.Run("existing config file", func(t *testing.T) { t.Parallel() assert := assert.New(t) dir, err := os.MkdirTemp("", "rancher-cli-test-*") assert.NoError(err) defer os.RemoveAll(dir) path := filepath.Join(dir, "cli2.json") err = os.WriteFile(path, []byte(validFile), 0700) assert.NoError(err) conf, err := LoadFromPath(path) assert.NoError(err) err = conf.Write() assert.NoError(err) info, err := 
os.Stat(path) assert.NoError(err) assert.Equal(os.FileMode(0700), info.Mode()) }) } func Test_LoadFromPath(t *testing.T) { t.Parallel() tests := []struct { name string content string expectedConf Config expectedErr bool }{ { name: "valid config", content: validFile, expectedConf: Config{ Servers: map[string]*ServerConfig{ "rancherDefault": { AccessKey: "the-access-key", SecretKey: "the-secret-key", TokenKey: "the-token-key", URL: "https://example.com", Project: "cluster-id:project-id", CACerts: "", }, }, CurrentServer: "rancherDefault", }, }, { name: "invalid config", content: invalidFile, expectedConf: Config{ Servers: map[string]*ServerConfig{}, }, expectedErr: true, }, { name: "non existing file", content: "", expectedConf: Config{ Servers: map[string]*ServerConfig{}, CurrentServer: "", }, }, } for _, tt := range tests { tt := tt t.Run(tt.name, func(t *testing.T) { t.Parallel() assert := assert.New(t) dir, err := os.MkdirTemp("", "rancher-cli-test-*") assert.NoError(err) defer os.RemoveAll(dir) path := filepath.Join(dir, "cli2.json") // make sure the path points to the temp dir created in the test tt.expectedConf.Path = path if tt.content != "" { err = os.WriteFile(path, []byte(tt.content), 0600) assert.NoError(err) } conf, err := LoadFromPath(path) if tt.expectedErr { assert.Error(err) // We kept the old behavior of returning a valid config even in // case of an error so we assert it here. If you change this // behavior, make sure there aren't any regressions. assert.Equal(tt.expectedConf, conf) return } assert.NoError(err) assert.Equal(tt.expectedConf, conf) }) } } 0707010000002F000081A4000000000000000000000001673C868500000B38000000000000000000000000000000000000002900000000rancher-cli-2.10.0/config/kube_config.gopackage config import "time" // ExecCredential is used by exec-based plugins to communicate credentials to // HTTP transports. 
//v1beta1/types.go type ExecCredential struct { TypeMeta `json:",inline"` // Spec holds information passed to the plugin by the transport. This contains // request and runtime specific information, such as if the session is interactive. Spec ExecCredentialSpec `json:"spec,omitempty"` // Status is filled in by the plugin and holds the credentials that the transport // should use to contact the API. // +optional Status *ExecCredentialStatus `json:"status,omitempty"` } // ExecCredentialSpec holds request and runtime specific information provided by // the transport. type ExecCredentialSpec struct{} // ExecCredentialStatus holds credentials for the transport to use. // Token and ClientKeyData are sensitive fields. This data should only be // transmitted in-memory between client and exec plugin process. Exec plugin // itself should at least be protected via file permissions. type ExecCredentialStatus struct { // ExpirationTimestamp indicates a time when the provided credentials expire. // +optional ExpirationTimestamp *Time `json:"expirationTimestamp,omitempty"` // Token is a bearer token used by the client for request authentication. Token string `json:"token,omitempty"` // PEM-encoded client TLS certificates (including intermediates, if any). ClientCertificateData string `json:"clientCertificateData,omitempty"` // PEM-encoded private key for the above certificate. ClientKeyData string `json:"clientKeyData,omitempty"` } // TypeMeta describes an individual object in an API response or request // with strings representing the type of the object and its API schema version. // Structures that are versioned or persisted should inline TypeMeta. type TypeMeta struct { // Kind is a string value representing the REST resource this object represents. // Servers may infer this from the endpoint the client submits requests to. // Cannot be updated. // In CamelCase. 
// More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds // +optional Kind string `json:"kind,omitempty" protobuf:"bytes,1,opt,name=kind"` // APIVersion defines the versioned schema of this representation of an object. // Servers should convert recognized schemas to the latest internal value, and // may reject unrecognized values. // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources // +optional APIVersion string `json:"apiVersion,omitempty" protobuf:"bytes,2,opt,name=apiVersion"` } // Time is a wrapper around time.Time which supports correct // marshaling to YAML and JSON. Wrappers are provided for many // of the factory methods that the time package offers. type Time struct { time.Time `protobuf:"-"` } 07070100000030000041ED000000000000000000000002673C868500000000000000000000000000000000000000000000001B00000000rancher-cli-2.10.0/contrib07070100000031000081ED000000000000000000000001673C8685000001C9000000000000000000000000000000000000002300000000rancher-cli-2.10.0/contrib/rancher#!/bin/bash [[ -d ~/.rancher ]] || mkdir -p ~/.rancher [[ -d ~/.ssh ]] || mkdir -p ~/.ssh [[ -e ~/.ssh/known_hosts ]] || touch ~/.ssh/known_hosts [[ -e ~/.rancher/cli.json ]] || echo '{"accessKey":"","secretKey":"","url":"","environment":""}' > ~/.rancher/cli.json IMAGE=${IMAGE:-rancher/cli} exec docker run --rm -it --net host -v ~/.rancher/cli.json:/root/.rancher/cli.json -v ~/.ssh/known_hosts:/root/.ssh/known_hosts -v $(pwd):/mnt ${IMAGE} "$@" 07070100000032000081A4000000000000000000000001673C868500000FB8000000000000000000000000000000000000001A00000000rancher-cli-2.10.0/go.modmodule github.com/rancher/cli go 1.23.0 toolchain go1.23.1 replace k8s.io/client-go => k8s.io/client-go v0.31.1 require ( github.com/ghodss/yaml v1.0.0 github.com/grantae/certinfo v0.0.0-20170412194111-59d56a35515b github.com/hashicorp/go-version v1.2.1 github.com/pkg/errors v0.9.1 github.com/rancher/norman 
v0.0.0-20241001183610-78a520c160ab github.com/rancher/rancher/pkg/apis v0.0.0-20241119020906-df45e368c82d github.com/rancher/rancher/pkg/client v0.0.0-20241119020906-df45e368c82d github.com/sirupsen/logrus v1.9.3 github.com/stretchr/testify v1.9.0 github.com/tidwall/gjson v1.17.0 github.com/urfave/cli v1.22.5 golang.org/x/exp v0.0.0-20240213143201-ec583247a57a golang.org/x/oauth2 v0.23.0 golang.org/x/sync v0.8.0 golang.org/x/term v0.25.0 golang.org/x/text v0.19.0 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c gopkg.in/yaml.v2 v2.4.0 k8s.io/client-go v12.0.0+incompatible ) require ( github.com/beorn7/perks v1.0.1 // indirect github.com/blang/semver/v4 v4.0.0 // indirect github.com/cespare/xxhash/v2 v2.3.0 // indirect github.com/cpuguy83/go-md2man/v2 v2.0.4 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/emicklei/go-restful/v3 v3.12.1 // indirect github.com/fxamacker/cbor/v2 v2.7.0 // indirect github.com/go-logr/logr v1.4.2 // indirect github.com/go-openapi/jsonpointer v0.21.0 // indirect github.com/go-openapi/jsonreference v0.20.2 // indirect github.com/go-openapi/swag v0.23.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang/protobuf v1.5.4 // indirect github.com/google/gnostic-models v0.6.8 // indirect github.com/google/go-cmp v0.6.0 // indirect github.com/google/gofuzz v1.2.0 // indirect github.com/google/uuid v1.6.0 // indirect github.com/gorilla/websocket v1.5.3 // indirect github.com/imdario/mergo v0.3.16 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/json-iterator/go v1.1.12 // indirect github.com/kr/pretty v0.3.1 // indirect github.com/kr/text v0.2.0 // indirect github.com/mailru/easyjson v0.7.7 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect github.com/modern-go/reflect2 v1.0.2 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect github.com/pmezard/go-difflib 
v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect github.com/prometheus/client_golang v1.19.1 // indirect github.com/prometheus/client_model v0.6.1 // indirect github.com/prometheus/common v0.55.0 // indirect github.com/prometheus/procfs v0.15.1 // indirect github.com/rancher/aks-operator v1.10.0 // indirect github.com/rancher/eks-operator v1.10.0 // indirect github.com/rancher/fleet/pkg/apis v0.11.0 // indirect github.com/rancher/gke-operator v1.10.0 // indirect github.com/rancher/lasso v0.0.0-20240924233157-8f384efc8813 // indirect github.com/rancher/rke v1.7.0 // indirect github.com/rancher/wrangler/v3 v3.1.0 // indirect github.com/rogpeppe/go-internal v1.12.0 // indirect github.com/russross/blackfriday/v2 v2.1.0 // indirect github.com/spf13/pflag v1.0.5 // indirect github.com/tidwall/match v1.1.1 // indirect github.com/tidwall/pretty v1.2.0 // indirect github.com/x448/float16 v0.8.4 // indirect golang.org/x/net v0.30.0 // indirect golang.org/x/sys v0.26.0 // indirect golang.org/x/time v0.7.0 // indirect google.golang.org/protobuf v1.35.1 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect k8s.io/api v0.31.1 // indirect k8s.io/apimachinery v0.31.1 // indirect k8s.io/apiserver v0.31.1 // indirect k8s.io/component-base v0.31.1 // indirect k8s.io/klog/v2 v2.130.1 // indirect k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 // indirect k8s.io/kubernetes v1.31.1 // indirect k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 // indirect sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect sigs.k8s.io/structured-merge-diff/v4 v4.4.1 // indirect sigs.k8s.io/yaml v1.4.0 // indirect ) 07070100000033000081A4000000000000000000000001673C868500005401000000000000000000000000000000000000001A00000000rancher-cli-2.10.0/go.sumgithub.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= github.com/beorn7/perks v1.0.1/go.mod 
h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM= github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ= github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU= github.com/cpuguy83/go-md2man/v2 v2.0.4 h1:wfIWP927BUkWJb2NmU/kNDYIBTh/ziUX91+lVfRxZq4= github.com/cpuguy83/go-md2man/v2 v2.0.4/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o= github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM= github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/emicklei/go-restful/v3 v3.12.1 h1:PJMDIM/ak7btuL8Ex0iYET9hxM3CI2sjZtzpL63nKAU= github.com/emicklei/go-restful/v3 v3.12.1/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk= github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04= github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs= 
github.com/go-openapi/jsonpointer v0.21.0 h1:YgdVicSA9vH5RiHs9TZW5oyafXZFc6+2Vc1rr/O9oNQ= github.com/go-openapi/jsonpointer v0.21.0/go.mod h1:IUyH9l/+uyhIYQ/PXVA41Rexl+kOkAPDdXEYns6fzUY= github.com/go-openapi/jsonreference v0.20.2 h1:3sVjiK66+uXK/6oQ8xgcRKcFgQ5KXa2KvnJRumpMGbE= github.com/go-openapi/jsonreference v0.20.2/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k= github.com/go-openapi/swag v0.22.3/go.mod h1:UzaqsxGiab7freDnrUUra0MwWfN/q7tE4j+VcZ0yl14= github.com/go-openapi/swag v0.23.0 h1:vsEVJDUo2hPJ2tu0/Xc+4noaxyEffXNIs3cOULZ+GrE= github.com/go-openapi/swag v0.23.0/go.mod h1:esZ8ITTYEsH1V2trKHjAN8Ai7xHb8RV+YSZ577vPjgQ= github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I= github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U= github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/pprof v0.0.0-20240827171923-fa2c70bbbfe5 h1:5iH8iuqE5apketRbSFBy+X1V0o+l+8NF1avt4HWl7cA= github.com/google/pprof 
v0.0.0-20240827171923-fa2c70bbbfe5/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg= github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/grantae/certinfo v0.0.0-20170412194111-59d56a35515b h1:NGgE5ELokSf2tZ/bydyDUKrvd/jP8lrAoPNeBuMOTOk= github.com/grantae/certinfo v0.0.0-20170412194111-59d56a35515b/go.mod h1:zT/uzhdQGTqlwTq7Lpbj3JoJQWfPfIJ1tE0OidAmih8= github.com/hashicorp/go-version v1.2.1 h1:zEfKbn2+PDgroKdiOzqiE8rsmLqU2uwi5PB5pBJ3TkI= github.com/hashicorp/go-version v1.2.1/go.mod h1:fltr4n8CU8Ke44wwGCBoEymUuxUHl09ZGVZPK5anwXA= github.com/imdario/mergo v0.3.16 h1:wwQJbIsHYGMUyLSPrEq1CT16AhnhNJQ51+4fdHUnCl4= github.com/imdario/mergo v0.3.16/go.mod h1:WBLT9ZmE3lPoWsEzCh9LPo3TiwVN+ZKEjmz+hD27ysY= github.com/josharian/intern v1.0.0 h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI= github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.2.0 
h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0= github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc= github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/onsi/ginkgo/v2 v2.20.2 h1:7NVCeyIWROIAheY21RLS+3j2bb52W0W82tkberYytp4= github.com/onsi/ginkgo/v2 v2.20.2/go.mod h1:K9gyxPIlb+aIvnZ8bd9Ak+YP18w3APlR+5coaZoE2ag= github.com/onsi/gomega v1.34.2 h1:pNCwDkzrsv7MS9kpaQvVb1aVLahQXyJ/Tv5oAZMI3i8= github.com/onsi/gomega v1.34.2/go.mod h1:v1xfxRgk0KIsG+QOdm7p8UosrOzPYRo60fd3B/1Dukc= github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod 
h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/prometheus/client_golang v1.19.1 h1:wZWJDwK+NameRJuPGDhlnFgx8e8HN3XHQeLaYJFJBOE= github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho= github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E= github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY= github.com/prometheus/common v0.55.0 h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc= github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8= github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc= github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk= github.com/rancher/aks-operator v1.10.0 h1:9PGJUyzso2Tg9o64sYI6++mCke9ToRchvN5uZqPV+kY= github.com/rancher/aks-operator v1.10.0/go.mod h1:n7CBXwN5mpJZT7/3PYg6cWBAVCqjayhaUiRtTCH1FMQ= github.com/rancher/eks-operator v1.10.0 h1:a3l3nmoIf5EiYS4BQ+a9Z8+0WwZ3duek6gnrT6VZKwk= github.com/rancher/eks-operator v1.10.0/go.mod h1:coW31jIfImAHdGsepc7yCXSuixdclQkJn3y26E9tsss= github.com/rancher/fleet/pkg/apis v0.11.0 h1:4OjUfgGdGMQUOHDI8HWN79N9P4U5g9XiPCCbrkZVOMo= github.com/rancher/fleet/pkg/apis v0.11.0/go.mod h1:8nvuO8x0z7ydpW0eZJEEEPHI0Bmb9T5L3igH0t+0dDk= github.com/rancher/gke-operator v1.10.0 h1:vV9jLErnH5VRBpK/kCzem8T7/yEDqLVXIcv20Or7e7I= github.com/rancher/gke-operator v1.10.0/go.mod h1:k3oIJMCilpaLHeHPRy90S3pfZ05vbe+b+g1ISiHQbLo= github.com/rancher/lasso v0.0.0-20240924233157-8f384efc8813 h1:V/LY8pUHZG9Kc+xEDWDOryOnCU6/Q+Lsr9QQEQnshpU= github.com/rancher/lasso v0.0.0-20240924233157-8f384efc8813/go.mod h1:IxgTBO55lziYhTEETyVKiT8/B5Rg92qYiRmcIIYoPgI= github.com/rancher/norman v0.0.0-20241001183610-78a520c160ab h1:ihK6See3y/JilqZlc0CG7NXPN+ue5nY9U7xUZUA8M7I= github.com/rancher/norman v0.0.0-20241001183610-78a520c160ab/go.mod h1:qX/OG/4wY27xSAcSdRilUBxBumV6Ey2CWpAeaKnBQDs= github.com/rancher/rancher/pkg/apis 
v0.0.0-20241119020906-df45e368c82d h1:eiUEBkdnLLR1+e0JBJiLT95xYouFFisWqDlRp/+3P2A= github.com/rancher/rancher/pkg/apis v0.0.0-20241119020906-df45e368c82d/go.mod h1:vm4Y3LVgGn4bWOw7pNTYnqvJhrWM7l0FeGPs2s9QiTA= github.com/rancher/rancher/pkg/client v0.0.0-20241119020906-df45e368c82d h1:nY18eCkit/7gM27W/of0hcc9FSpq0x0I0pJ5fCkC72I= github.com/rancher/rancher/pkg/client v0.0.0-20241119020906-df45e368c82d/go.mod h1:rYJRcRhgLLnCFAlomfhBN5uZLR5qA2v0Hd9xSe+qXZA= github.com/rancher/rke v1.7.0 h1:UFQOh/y1TXsWbbeNR3r8mDxGm9WYHyb6+F8u7rIKNL0= github.com/rancher/rke v1.7.0/go.mod h1:+x++Mvl0A3jIzNLiu8nkraqZXiHg6VPWv0Xl4iQCg+A= github.com/rancher/wrangler/v3 v3.1.0 h1:8ETBnQOEcZaR6WBmUSysWW7WnERBOiNTMJr4Dj3UG/s= github.com/rancher/wrangler/v3 v3.1.0/go.mod h1:gUPHS1ANs2NyByfeERHwkGiQ1rlIa8BpTJZtNSgMlZw= github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs= github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8= github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4= github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk= github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM= github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc= github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ= github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ= github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA= github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= github.com/stretchr/objx 
v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg= github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY= github.com/tidwall/gjson v1.17.0 h1:/Jocvlh98kcTfpN2+JzGQWQcqrPQwDrVEMApx/M5ZwM= github.com/tidwall/gjson v1.17.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk= github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA= github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM= github.com/tidwall/pretty v1.2.0 h1:RWIZEg2iJ8/g6fDDYzMpobmaoGh5OLl4AXtGUGPcqCs= github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU= github.com/urfave/cli v1.22.5 h1:lNq9sAHXK2qfdI8W+GRItjCEkI+2oR4d+MEHy1CKXoU= github.com/urfave/cli v1.22.5/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0= github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= go.uber.org/mock v0.5.0 h1:KAMbZvZPyBPWgD14IrIQ38QCyjwpvVVV6K/bHl1IwQU= go.uber.org/mock v0.5.0/go.mod h1:ge71pBPLYDk7QIi1LupWxdAykm7KIEFchiOqd6z7qMM= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto 
v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/exp v0.0.0-20240213143201-ec583247a57a h1:HinSgX1tJRX3KsL//Gxynpw5CTOAIPhgL4W8PNiIpVE= golang.org/x/exp v0.0.0-20240213143201-ec583247a57a/go.mod h1:CxmFvTBINI24O/j8iY7H1xHzx2i4OsyguNBmN/uPtqc= golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.30.0 h1:AcW1SDZMkb8IpzCdQUaIq2sP4sZ4zw+55h6ynffypl4= golang.org/x/net v0.30.0/go.mod h1:2wGyMJ5iFasEhkwi13ChkO/t1ECNC4X4eBKkVFyYFlU= golang.org/x/oauth2 v0.23.0 h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs= golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.8.0 h1:3NFvSEYkUoMifnESzZl15y791HH1qU2xm6eCJU5ZPXQ= golang.org/x/sync v0.8.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= 
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.26.0 h1:KHjCJyddX0LoSTb3J+vWpupP9p0oznkqVk/IfjymZbo= golang.org/x/sys v0.26.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term v0.25.0 h1:WtHI/ltw4NvSUig5KARz9h521QvRC8RmF/cuYqifU24= golang.org/x/term v0.25.0/go.mod h1:RPyXicDX+6vLxogjjRxjgD2TKtmAO6NZBsBRfrOLu7M= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.19.0 h1:kTxAhCbGbxhK0IwgSKiMO5awPoDQ0RpfiVYBfK860YM= golang.org/x/text v0.19.0/go.mod h1:BuEKDfySbSR4drPmRPG/7iBdf8hvFMuRexcpahXilzY= golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ= golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= golang.org/x/tools v0.24.0 h1:J1shsA93PJUEVaUSaay7UXAyE8aimq3GW0pjlolpa24= golang.org/x/tools v0.24.0/go.mod h1:YhNqVBIfWHdzvTLs0d8LCuMhkKUgSUKldakyV7W/WDQ= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod 
h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA= google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4= gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= k8s.io/api v0.31.1 h1:Xe1hX/fPW3PXYYv8BlozYqw63ytA92snr96zMW9gWTU= k8s.io/api v0.31.1/go.mod h1:sbN1g6eY6XVLeqNsZGLnI5FwVseTrZX7Fv3O26rhAaI= k8s.io/apimachinery v0.31.1 h1:mhcUBbj7KUjaVhyXILglcVjuS4nYXiwC+KKFBgIVy7U= k8s.io/apimachinery v0.31.1/go.mod h1:rsPdaZJfTfLsNJSQzNHQvYoTmxhoOEofxtOsF3rtsMo= k8s.io/apiserver v0.31.1 h1:Sars5ejQDCRBY5f7R3QFHdqN3s61nhkpaX8/k1iEw1c= k8s.io/apiserver v0.31.1/go.mod h1:lzDhpeToamVZJmmFlaLwdYZwd7zB+WYRYIboqA1kGxM= k8s.io/client-go v0.31.1 h1:f0ugtWSbWpxHR7sjVpQwuvw9a3ZKLXX0u0itkFXufb0= k8s.io/client-go v0.31.1/go.mod 
h1:sKI8871MJN2OyeqRlmA4W4KM9KBdBUpDLu/43eGemCg= k8s.io/component-base v0.31.1 h1:UpOepcrX3rQ3ab5NB6g5iP0tvsgJWzxTyAo20sgYSy8= k8s.io/component-base v0.31.1/go.mod h1:WGeaw7t/kTsqpVTaCoVEtillbqAhF2/JgvO0LDOMa0w= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340 h1:BZqlfIlq5YbRMFko6/PM7FjZpUb45WallggurYhKGag= k8s.io/kube-openapi v0.0.0-20240228011516-70dd3763d340/go.mod h1:yD4MZYeKMBwQKVht279WycxKyM84kkAx2DPrTXaeb98= k8s.io/kubernetes v1.31.1 h1:1fcYJe8SAhtannpChbmnzHLwAV9Je99PrGaFtBvCxms= k8s.io/kubernetes v1.31.1/go.mod h1:/YGPL//Fb9mdv5vukvAQ7Xon+Bqwry52bmjTdORAw+Q= k8s.io/utils v0.0.0-20240711033017-18e509b52bc8 h1:pUdcCO1Lk/tbT5ztQWOBi5HBgbBP1J8+AsQnQCKsi8A= k8s.io/utils v0.0.0-20240711033017-18e509b52bc8/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo= sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0= sigs.k8s.io/structured-merge-diff/v4 v4.4.1 h1:150L+0vs/8DA78h1u02ooW1/fFq/Lwr+sGiqlzvrtq4= sigs.k8s.io/structured-merge-diff/v4 v4.4.1/go.mod h1:N8hJocpFajUSSeSJ9bOZ77VzejKZaXsTtZo4/u7Io08= sigs.k8s.io/yaml v1.4.0 h1:Mk1wCc2gy/F0THH0TAp1QYyJNzRm2KCLy3o5ASXVI5E= sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= 07070100000034000081A4000000000000000000000001673C868500000F36000000000000000000000000000000000000001B00000000rancher-cli-2.10.0/main.gopackage main import ( "os" "regexp" "strings" "github.com/pkg/errors" "github.com/rancher/cli/cmd" "github.com/rancher/cli/config" "github.com/sirupsen/logrus" "github.com/urfave/cli" ) var VERSION = "dev" var AppHelpTemplate = `{{.Usage}} Usage: {{.Name}} {{if .Flags}}[OPTIONS] {{end}}COMMAND [arg...] 
Version: {{.Version}} {{if .Flags}} Options: {{range .Flags}}{{if .Hidden}}{{else}}{{.}} {{end}}{{end}}{{end}} Commands: {{range .Commands}}{{.Name}}{{with .Aliases}}, {{.}}{{end}}{{ "\t" }}{{.Usage}} {{end}} Run '{{.Name}} COMMAND --help' for more information on a command. ` var CommandHelpTemplate = `{{.Usage}} {{if .Description}}{{.Description}}{{end}} Usage: {{.HelpName}} {{if .Flags}}[OPTIONS] {{end}}{{if ne "None" .ArgsUsage}}{{if ne "" .ArgsUsage}}{{.ArgsUsage}}{{else}}[arg...]{{end}}{{end}} {{if .Flags}}Options:{{range .Flags}} {{.}}{{end}}{{end}} ` var SubcommandHelpTemplate = `{{.Usage}} {{if .Description}}{{.Description}}{{end}} Usage: {{.HelpName}} command{{if .VisibleFlags}} [command options]{{end}} {{if .ArgsUsage}}{{.ArgsUsage}}{{else}}[arguments...]{{end}} Commands:{{range .VisibleCategories}}{{if .Name}} {{.Name}}:{{end}}{{range .VisibleCommands}} {{join .Names ", "}}{{"\t"}}{{.Usage}}{{end}} {{end}}{{if .VisibleFlags}} Options: {{range .VisibleFlags}}{{.}} {{end}}{{end}} ` func main() { if err := mainErr(); err != nil { logrus.Fatal(err) } } func mainErr() error { cli.AppHelpTemplate = AppHelpTemplate cli.CommandHelpTemplate = CommandHelpTemplate cli.SubcommandHelpTemplate = SubcommandHelpTemplate app := cli.NewApp() app.Name = "rancher" app.Usage = "Rancher CLI, managing containers one UTF-8 character at a time" app.Before = func(ctx *cli.Context) error { if ctx.GlobalBool("debug") { logrus.SetLevel(logrus.DebugLevel) } path := cmd.GetConfigPath(ctx) warnings, err := config.GetFilePermissionWarnings(path) if err != nil { // We don't want to block the execution of the CLI in that case logrus.Errorf("Unable to verify config file permission: %s. Continuing.", err) } for _, warning := range warnings { logrus.Warning(warning) } return nil } app.Version = VERSION app.Author = "Rancher Labs, Inc." 
	app.Email = ""

	configDir, err := cmd.ConfigDir()
	if err != nil {
		return err
	}
	app.Flags = []cli.Flag{
		cli.BoolFlag{
			Name:  "debug",
			Usage: "Debug logging",
		},
		cli.StringFlag{
			Name:   "config, c",
			Usage:  "Path to rancher config",
			EnvVar: "RANCHER_CONFIG_DIR",
			Value:  configDir,
		},
	}
	app.Commands = []cli.Command{
		cmd.AppCommand(),
		cmd.CatalogCommand(),
		cmd.ClusterCommand(),
		cmd.ContextCommand(),
		cmd.InspectCommand(),
		cmd.KubectlCommand(),
		cmd.LoginCommand(),
		cmd.MachineCommand(),
		cmd.MultiClusterAppCommand(),
		cmd.NamespaceCommand(),
		cmd.NodeCommand(),
		cmd.ProjectCommand(),
		cmd.PsCommand(),
		cmd.ServerCommand(),
		cmd.SettingsCommand(),
		cmd.SSHCommand(),
		cmd.UpCommand(),
		cmd.WaitCommand(),
		cmd.CredentialCommand(),
	}

	parsed, err := parseArgs(os.Args)
	if err != nil {
		logrus.Error(err)
		os.Exit(1)
	}

	return app.Run(parsed)
}

var singleAlphaLetterRegxp = regexp.MustCompile("[a-zA-Z]")

func parseArgs(args []string) ([]string, error) {
	result := []string{}
	for _, arg := range args {
		if strings.HasPrefix(arg, "-") && !strings.HasPrefix(arg, "--") && len(arg) > 1 {
			for i, c := range arg[1:] {
				if string(c) == "=" {
					if i < 1 {
						return nil, errors.New("invalid input with '-' and '=' flag")
					}
					result[len(result)-1] = result[len(result)-1] + arg[i+1:]
					break
				} else if singleAlphaLetterRegxp.MatchString(string(c)) {
					result = append(result, "-"+string(c))
				} else {
					return nil, errors.Errorf("invalid input %v in flag", string(c))
				}
			}
		} else {
			result = append(result, arg)
		}
	}
	return result, nil
}
07070100000035000081A4000000000000000000000001673C86850000055B000000000000000000000000000000000000002000000000rancher-cli-2.10.0/main_test.go
package main

import (
	"testing"

	"gopkg.in/check.v1"
)

// Hook up gocheck into the "go test" runner.
func Test(t *testing.T) { check.TestingT(t) }

type MainTestSuite struct {
}

var _ = check.Suite(&MainTestSuite{})

func (m *MainTestSuite) SetUpSuite(c *check.C) {
}

func (m *MainTestSuite) TestParseArgs(c *check.C) {
	input := [][]string{
		{"rancher", "run", "--debug", "-itd"},
		{"rancher", "run", "--debug", "-itf=b"},
		{"rancher", "run", "--debug", "-itd#"},
		{"rancher", "run", "--debug", "-f=b"},
		{"rancher", "run", "--debug", "-=b"},
		{"rancher", "run", "--debug", "-"},
	}

	r0, err := parseArgs(input[0])
	if err != nil {
		c.Fatal(err)
	}
	c.Assert(r0, check.DeepEquals, []string{"rancher", "run", "--debug", "-i", "-t", "-d"})

	r1, err := parseArgs(input[1])
	if err != nil {
		c.Fatal(err)
	}
	c.Assert(r1, check.DeepEquals, []string{"rancher", "run", "--debug", "-i", "-t", "-f=b"})

	_, err = parseArgs(input[2])
	if err == nil {
		c.Fatal("should raise error")
	}

	r3, err := parseArgs(input[3])
	if err != nil {
		c.Fatal(err)
	}
	c.Assert(r3, check.DeepEquals, []string{"rancher", "run", "--debug", "-f=b"})

	_, err = parseArgs(input[4])
	if err == nil {
		c.Fatal("should raise error")
	}

	r5, err := parseArgs(input[5])
	if err != nil {
		c.Fatal(err)
	}
	c.Assert(r5, check.DeepEquals, []string{"rancher", "run", "--debug", "-"})
}
07070100000036000041ED000000000000000000000002673C868500000000000000000000000000000000000000000000001B00000000rancher-cli-2.10.0/package
07070100000037000081A4000000000000000000000001673C868500000214000000000000000000000000000000000000002600000000rancher-cli-2.10.0/package/Dockerfile
FROM registry.suse.com/bci/bci-base:15.6

ARG user=cli

RUN zypper -n update && \
    zypper -n install ca-certificates openssh-clients && \
    zypper clean -a && rm -rf /tmp/* /var/tmp/* /usr/share/doc/packages/* /usr/share/doc/manual/* /var/log/*

RUN echo "$user:x:1000:1000::/home/$user:/bin/bash" >> /etc/passwd && \
    echo "$user:x:1000:" >> /etc/group && \
    mkdir /home/$user && \
    chown -R $user:$user /home/$user

COPY rancher /usr/bin/

WORKDIR /home/$user
USER 1000:1000

ENTRYPOINT ["rancher"]
CMD ["--help"]
07070100000038000041ED000000000000000000000002673C868500000000000000000000000000000000000000000000001B00000000rancher-cli-2.10.0/scripts
07070100000039000081ED000000000000000000000001673C868500000373000000000000000000000000000000000000002100000000rancher-cli-2.10.0/scripts/build
#!/bin/bash -e

source $(dirname $0)/version

cd $(dirname $0)/..

declare -A OS_ARCH_ARG

OS_PLATFORM_ARG=(linux windows darwin)

OS_ARCH_ARG[linux]="amd64 arm s390x"
OS_ARCH_ARG[windows]="386 amd64"
OS_ARCH_ARG[darwin]="amd64 arm64"

CGO_ENABLED=0 go build -ldflags="-w -s -X main.VERSION=$VERSION -extldflags -static" -o bin/rancher

if [ -n "$CROSS" ]; then
    rm -rf build/bin
    mkdir -p build/bin
    for OS in ${OS_PLATFORM_ARG[@]}; do
        for ARCH in ${OS_ARCH_ARG[${OS}]}; do
            OUTPUT_BIN="build/bin/rancher_$OS-$ARCH"
            if test "$OS" = "windows"; then
                OUTPUT_BIN="${OUTPUT_BIN}.exe"
            fi
            echo "Building binary for $OS/$ARCH..."
            GOARCH=$ARCH GOOS=$OS CGO_ENABLED=0 go build \
                -ldflags="-w -X main.VERSION=$VERSION" \
                -o ${OUTPUT_BIN} ./
        done
    done
fi
0707010000003A000081ED000000000000000000000001673C868500000051000000000000000000000000000000000000001E00000000rancher-cli-2.10.0/scripts/ci
#!/bin/bash
set -e

cd $(dirname $0)

./build
./test
./lint
./validate
./package
0707010000003B000081ED000000000000000000000001673C868500000057000000000000000000000000000000000000002000000000rancher-cli-2.10.0/scripts/lint
#!/bin/bash
set -e

cd $(dirname $0)/..

echo Running: golangci-lint
golangci-lint run
0707010000003C000081ED000000000000000000000001673C868500000777000000000000000000000000000000000000002300000000rancher-cli-2.10.0/scripts/package
#!/bin/bash
set -e

source $(dirname $0)/version
cd $(dirname $0)/..

DIST=$(pwd)/dist/artifacts

mkdir -p $DIST/${VERSION} $DIST/latest

for i in build/bin/*; do
    if [ ! -e $i ]; then
        continue
    fi

    BASE=build/archive
    DIR=${BASE}/rancher-${VERSION}

    rm -rf $BASE
    mkdir -p $BASE $DIR

    EXT=
    if [[ $i =~ .*windows.* ]]; then
        EXT=.exe
    fi

    cp $i ${DIR}/rancher${EXT}

    arch=$(echo $i | cut -f2 -d_)
    mkdir -p $DIST/${VERSION}/binaries/$arch
    mkdir -p $DIST/latest/binaries/$arch
    cp $i $DIST/${VERSION}/binaries/$arch/rancher${EXT}
    if [ -z "${EXT}" ]; then
        gzip -c $i > $DIST/${VERSION}/binaries/$arch/rancher.gz
        xz -c $i > $DIST/${VERSION}/binaries/$arch/rancher.xz
    fi
    rm -rf $DIST/latest/binaries/$arch
    mkdir -p $DIST/latest/binaries
    cp -rf $DIST/${VERSION}/binaries/$arch $DIST/latest/binaries

    (
        cd $BASE
        NAME=$(basename $i | sed 's/_/-/g')
        if [ -z "$EXT" ]; then
            tar cvzf $DIST/${VERSION}/${NAME}-${VERSION}.tar.gz .
            cp $DIST/${VERSION}/${NAME}-${VERSION}.tar.gz $DIST/latest/${NAME}.tar.gz
            tar cvJf $DIST/${VERSION}/${NAME}-${VERSION}.tar.xz .
            cp $DIST/${VERSION}/${NAME}-${VERSION}.tar.xz $DIST/latest/${NAME}.tar.xz
        else
            NAME=$(echo $NAME | sed 's/'${EXT}'//g')
            zip -r $DIST/${VERSION}/${NAME}-${VERSION}.zip *
            cp $DIST/${VERSION}/${NAME}-${VERSION}.zip $DIST/latest/${NAME}.zip
        fi
    )
done

ARCH=${ARCH:-"amd64"}
SUFFIX=""
[ "${ARCH}" != "amd64" ] && SUFFIX="_${ARCH}"

cd package

TAG=${TAG:-${VERSION}${SUFFIX}}
REPO=${REPO:-rancher}

if echo $TAG | grep -q dirty; then
    TAG=dev
fi

if [ -n "$GITHUB_TAG" ]; then
    TAG=$GITHUB_TAG
fi

cp ../bin/rancher .

docker build -t ${REPO}/cli:${TAG} .

echo ${REPO}/cli:${TAG} > ../dist/images
echo Built ${REPO}/cli:${TAG}
0707010000003D000081ED000000000000000000000001673C868500000023000000000000000000000000000000000000002300000000rancher-cli-2.10.0/scripts/release
#!/bin/bash

exec $(dirname $0)/ci
0707010000003E000081ED000000000000000000000001673C868500000063000000000000000000000000000000000000002000000000rancher-cli-2.10.0/scripts/test
#!/bin/bash
set -e

cd $(dirname $0)/..

echo Running tests
go test -race -cover -tags=test ./...
0707010000003F000081ED000000000000000000000001673C8685000000E1000000000000000000000000000000000000002400000000rancher-cli-2.10.0/scripts/validate
#!/bin/bash
set -e

cd $(dirname $0)/..

echo Tidying up modules
go mod tidy

echo Verifying modules
go mod verify

if [ -n "$(git status --porcelain --untracked-files=no)" ]; then
    echo "Encountered dirty repo!"
    exit 1
fi
07070100000040000081ED000000000000000000000001673C868500000135000000000000000000000000000000000000002300000000rancher-cli-2.10.0/scripts/version
#!/bin/bash

if [ -n "$(git status --porcelain --untracked-files=no)" ]; then
    DIRTY="-dirty"
fi

COMMIT=$(git rev-parse --short HEAD)
GIT_TAG=${GITHUB_TAG:-$(git tag -l --contains HEAD | head -n 1)}

if [[ -z "$DIRTY" && -n "$GIT_TAG" ]]; then
    VERSION=$GIT_TAG
else
    VERSION="${COMMIT}${DIRTY}"
fi
07070100000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000B00000000TRAILER!!!
575 blocks
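For illustration only, the version-selection logic in scripts/version above can be exercised in isolation. The commit hash, dirty flag, and tag below are made-up stand-ins for the git output the real script reads:

```shell
#!/bin/bash
# Hypothetical inputs standing in for the git queries in scripts/version:
COMMIT=abc1234        # would come from: git rev-parse --short HEAD
DIRTY="-dirty"        # set when `git status --porcelain` reports changes
GIT_TAG="v2.10.0"     # would come from GITHUB_TAG or `git tag -l --contains HEAD`

# Same branch as scripts/version: a dirty tree always falls back to
# <commit>-dirty, even when a release tag is present on HEAD.
if [[ -z "$DIRTY" && -n "$GIT_TAG" ]]; then
    VERSION=$GIT_TAG
else
    VERSION="${COMMIT}${DIRTY}"
fi
echo "$VERSION"       # prints "abc1234-dirty"
```

This fallback is what scripts/package later detects with `grep -q dirty` to retag the image as `dev`.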