May 30

End-To-End Testing

Updated 6/8/2017

I had been trying to follow the community page on end-to-end testing, but kept striking out. I gave it a try on a native Mac (specifying KUBERNETES_PROVIDER=vagrant), on bare metal, and inside a VirtualBox VM running on a Mac. Each gave me different problems, which I’ll spare you in this blog. Instead, I’ll cut to the chase and describe what works…

One of the Kubernetes Developers (@ncdc) was kind enough to give me info on a working method, which I’ll elaborate on here.

Preparations

First, though, here is what I have for a setup:

  • Mac host (shouldn’t matter).
  • CentOS 7 Vagrant box with a 40 GB drive, running via VirtualBox.
  • VM has 8 GB RAM and 2 CPUs configured.
  • Go 1.8.1 installed and $GOPATH set up. Created ~/go/src/k8s.io as a local work area.
  • Tools installed: git, docker, emacs (or your favorite editor).
  • Clone of the Kubernetes repo into the work area (latest try used commit b77ed78):
    • git clone https://github.com/kubernetes/kubernetes.git

I started up the Docker daemon with:

sudo systemctl enable docker && sudo systemctl start docker

 

After trying this whole E2E process on a fresh VM setup, I found that when I tried to run the tests, the ginkgo app was not found in the expected areas under _output/. To remedy this, I did a “make”, which builds everything, including ginkgo, and places it in _output/local/go/bin/ginkgo. There may be a way to build just ginkgo (see the note below), but for now, this works.
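
If you would rather not build everything, a narrower make invocation should build just ginkgo from its vendored location. I haven’t verified this on this exact commit, so treat it as an untested shortcut:

make WHAT=vendor/github.com/onsi/ginkgo/ginkgo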

 

Starting The Cluster

From my Kubernetes repo area, ~/go/src/k8s.io/kubernetes, I made sure that etcd was installed and PATH was updated, as suggested:

hack/install-etcd.sh
export PATH=$PATH:`pwd`/third_party/etcd
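
To sanity-check that the freshly installed etcd is the one found on the PATH (standard commands, nothing specific to this setup):

which etcd
etcd --version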

 

Next, build hyperkube and kubectl:

make WHAT='cmd/hyperkube cmd/kubectl'
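
To double-check that the binaries landed where local-up-cluster.sh will look for them (the -o _output/bin/ argument used below), you can list them; this path is from my setup, so yours may differ:

ls -l _output/bin/hyperkube _output/bin/kubectl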

 

You can then start up the cluster, using:

sudo LOG_LEVEL=4 API_HOST=10.0.2.15 ENABLE_RBAC=true -E PATH=$PATH \
     ./hack/local-up-cluster.sh -o _output/bin/

The API_HOST is set to the node’s main interface address, which for a VirtualBox VM is usually 10.0.2.15. If you are running under the root account (I haven’t tried that, so YMMV), you won’t need sudo or the “-E PATH=$PATH” clause. Feel free to use a different LOG_LEVEL, if desired, too.
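
If you want to confirm the address on your VM, asking for the route to an outside IP will show the source address in use (assuming the iproute2 tools are installed; the 8.8.8.8 target is arbitrary):

ip route get 8.8.8.8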

 

Running Tests

Once everything is up, you’ll get a message on how to use the cluster from another window. So, I opened another terminal window, did “vagrant ssh” to access my VM, and changed to the Kubernetes directory. I ran these commands to prepare for a test run:

sudo chown -R vagrant /var/run/kubernetes $HOME/.kube
export KUBERNETES_PROVIDER=local
export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig

 

Here, my user is “vagrant”. To prevent e2e failures, @ncdc told me to always do the chown command after stopping and then restarting the cluster.

The cluster can be examined with the kubectl wrapper script, e.g. “cluster/kubectl.sh get nodes”.
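
A few more ways to poke at the cluster through the same wrapper script (these are just standard kubectl subcommands):

cluster/kubectl.sh cluster-info
cluster/kubectl.sh get pods --all-namespaces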

Now, you can run the end-to-end tests, with your desired ginkgo.focus. Here is an example:

go run ./hack/e2e.go -- -v -test -test_args '--ginkgo.v --ginkgo.focus Kubectl.expose'

 

At the end of the run, you’ll see this type of output:

Ran 1 of 631 Specs in 28.812 seconds
SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 630 Skipped PASS

Ginkgo ran 1 suite in 29.156205396s
Test Suite Passed
2017/05/30 13:40:55 util.go:131: Step './hack/ginkgo-e2e.sh -ginkgo.v -ginkgo.focus Kubectl.expose' finished in 29.254525787s
2017/05/30 13:40:55 e2e.go:80: Done

 

I ran Conformance tests, with:

go run ./hack/e2e.go -- -v -test -test_args '--ginkgo.v --ginkgo.focus \[Conformance\]'

 

At the end of the output, I could see the test results:

Ran 148 of 646 Specs in 3493.031 seconds
FAIL! -- 127 Passed | 21 Failed | 0 Pending | 498 Skipped --- FAIL: TestE2E (3493.05s)

 

All of these failures were under framework/pods.go and related to Volumes. I’m not sure what is wrong, but it looks like some were failures to create pods due to the security context.



Learning How To Focus

The ginkgo.focus argument is a regular expression that is matched against the It() clauses in the code under test/e2e/*. You cannot use quotes or spaces in the expression (so, use \s in place of a space). For example, if I see that there are test cases that use the word Selector:

git grep Selector | egrep "It[(]"
test/e2e/network_policy.go: It("should enforce policy based on PodSelector [Feature:NetworkPolicy]", func() {
test/e2e/network_policy.go: It("should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]", func() {
test/e2e/network_policy.go: It("should enforce policy based on NamespaceSelector [Feature:NetworkPolicy]", func() {
test/e2e/scheduling/predicates.go: It("validates that NodeSelector is respected if not matching [Conformance]", func() {
test/e2e/scheduling/predicates.go: It("validates that NodeSelector is respected if matching [Conformance]", func() {
test/e2e/scheduling/predicates.go: It("validates that a pod with an invalid podAffinity is rejected because of the LabelSelectorRequirement is invalid", func() {

I can run the test with:

go run ./hack/e2e.go -- -v -test -test_args '--ginkgo.v --ginkgo.focus Selector'

It shows this near the end of the output:
Summarizing 2 Failures:

[Fail] [k8s.io] NetworkPolicy [It] should enforce policy based on NamespaceSelector [Feature:NetworkPolicy]
/home/vagrant/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network_policy.go:356

[Fail] [k8s.io] NetworkPolicy [It] should enforce policy based on PodSelector [Feature:NetworkPolicy]
/home/vagrant/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network_policy.go:356

Ran 6 of 646 Specs in 399.804 seconds
FAIL! -- 4 Passed | 2 Failed | 0 Pending | 640 Skipped --- FAIL: TestE2E (399.83s)
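
As an aside, since spaces aren’t allowed in the focus expression, you can target a single It() description by replacing each space with \s. A hypothetical example, following the same pattern as the runs above:

go run ./hack/e2e.go -- -v -test -test_args '--ginkgo.v --ginkgo.focus should\senforce\spolicy\sbased\son\sPodSelector'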

 

Being a regular expression, the focus can be refined to run only the three tests in network_policy.go. First, I saw that the desired tests were under this section:

var _ = framework.KubeDescribe("NetworkPolicy", func() {
f := framework.NewDefaultFramework("network-policy")
...
    It("should enforce policy based on PodSelector [Feature:NetworkPolicy]", func() {
    ...

I then modified the run to use this focus:

go run ./hack/e2e.go -- -v -test -test_args '--ginkgo.v --ginkgo.focus NetworkPolicy.*Selector'
...
Summarizing 2 Failures:

[Fail] [k8s.io] NetworkPolicy [It] should enforce policy based on NamespaceSelector [Feature:NetworkPolicy]
/home/vagrant/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network_policy.go:356

[Fail] [k8s.io] NetworkPolicy [It] should enforce policy based on PodSelector [Feature:NetworkPolicy]
/home/vagrant/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network_policy.go:356

Ran 3 of 646 Specs in 156.527 seconds
FAIL! -- 1 Passed | 2 Failed | 0 Pending | 643 Skipped --- FAIL: TestE2E (156.61s)

 

There are some example labels that can be used for the focus (and that page has some examples you could use as well). Hint: I wouldn’t run the tests without any focus set… it takes a really long time.
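
Ginkgo also has a skip argument, which takes the same style of regular expression and excludes matching tests. As a sketch (I haven’t timed this variation myself), this would run the Conformance tests while skipping any marked [Serial]:

go run ./hack/e2e.go -- -v -test -test_args '--ginkgo.v --ginkgo.focus \[Conformance\] --ginkgo.skip \[Serial\]'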

When you are all done, just press control-C in the first window to shut down the cluster. Don’t forget to do the chown command above, if you restart the cluster.

Important Notes

A few things I found out…

Having a large disk will be important, as it is not easily resizable with Vagrant. I found that 40 GB was more than enough. Some Vagrant boxes have only 20 GB, and I’ve run out of space after using one for a while.

 

Be sure, when you run the tests, that you have the -v option after the double dash, or specify it inside the test_args string.

 

If you are changing your code and then want to retest, you can run make for just cmd/hyperkube, and then re-run local-up-cluster.sh, as shown below. Hyperkube is an all-in-one binary with kube-apiserver, kubelet, kube-scheduler, kube-controller-manager, and kube-proxy.
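
Concretely, the edit/rebuild/restart cycle uses the same commands shown earlier, just limited to hyperkube (and don’t forget the chown command if you restart the cluster):

make WHAT='cmd/hyperkube'
sudo LOG_LEVEL=4 API_HOST=10.0.2.15 ENABLE_RBAC=true -E PATH=$PATH \
     ./hack/local-up-cluster.sh -o _output/bin/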

 

You can use this setup for development work, although you’ll likely want to include additional tools, and maybe even play with kubeadm.

 

As of 5/31/2017, there is a bug in the tests that is preventing kubelet from starting. The fix is being worked on under PR 46709. In the meantime, though, you can start up the cluster with this:

sudo FEATURE_GATES=AllAlpha=false LOG_LEVEL=4 API_HOST=10.0.2.15 ENABLE_RBAC=true -E PATH=$PATH ./hack/local-up-cluster.sh -o _output/bin/

UPDATE: On 6/8/2017, I pulled the latest Kubernetes and didn’t have to use this temporary fix, so the change has been merged upstream.

 

May 11

I’ve pushed up a Kubernetes change for review… now what?

Updated V2 – 6/8/2017

I’m assuming you’ve already gone to the Community page on contributing for information on how to find issues to work on, how to build and test Kubernetes, and how to sign the Contributor License Agreement (CLA), and that you’ve followed the link on how to do a pull request.

Your code is up there, ready for review. Now what?

How do you know who is reviewing the code, and what the exact steps are?

What if it is not getting reviewed?

Hopefully, the notes here will help…

By the way…

If it’s your first commit, I highly recommend doing something super simple, so that you can get the process down without having something technically challenging to deal with as well. Using labels, you can look for low-hanging-fruit issues or issues for new contributors.

Pull Request Posted…Now What?

Once you’ve forked the Kubernetes repo, pushed your changes up to your repo, clicked on the “Compare & pull request” button, and filled out the pull request, you’ll see several things happen.

The k8s-robot will take some actions to make sure that you have a signed CLA…

And that the commit needs an OK to be tested…

The k8s-reviewer bot may leave a comment indicating that the PR is “reviewable” over at reviewable.kubernetes.io (at the time of this posting, this seems to be enabled only occasionally).

Lastly, the bot will assign a reviewer and provide some useful info…

Note that it indicates the OWNERS file. You can go to that file to see the approvers and, depending on the file, reviewers who could review the commit.

What do you do if the reviewer doesn’t respond to the review?

From the kubernetes-dev Google group, Erick Fejta gave three suggestions. Here’s some elaboration on them…

 

Un-assign the current reviewer

To do this, you can add a comment on the review that looks like this (the username here is just a placeholder for the currently assigned reviewer’s GitHub handle):
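
/unassign @reviewer-user-name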

Now, I did this, but forgot the ‘@’ symbol. It didn’t do anything, and folks pointed out that going back and editing the comment to add the at-sign would not help, because the bot only looks at new comments. Whoops.

 

Use the Slack channel

  • Sign up for the Kubernetes “Team” by going to slack.kubernetes.io. Once done, you can log in at kubernetes.slack.com.
  • Find the channel for the code you are changing. In my case, I had changed code in k8s.io/apimachinery/pkg/… so I used the sig-api-machinery channel.
  • Ask on the channel for a recommendation of reviewers.
  • Use the /assign comment (again with an at-sign before the user name(s)) to assign the review to the people recommended.

You can also look for the reviewer in Slack, and when they are available, touch base with them to see if they have bandwidth to review. That’s what I did in one case, and the person indicated they were wicked busy.

 

Use the OWNERS file

Above I showed that the bot indicated the OWNERS file. For one of my commits, it had this OWNERS file for federation with a bunch of reviewers. Some, like the apimachinery one, may only have approvers shown (which I guess double as reviewers).

Now, you could randomly assign from that list, but a better method is to take into consideration the workload of the reviewers. To do that, go to the Pull Request Dashboard and specify the reviewer’s name. You’ll get a page, right below the page banner, with a list of the reviews the person is working on.

As you can see, ‘lavalamp’ was pretty busy at the time of this review. You can enter other people’s names from the OWNERS file into the text field and see how busy they are, to make a better decision. I suspect you could even try to ping them on Slack to see if they have time to review. Also, you can click on “Me” and see your outgoing commits/reviews.

How To House Train Your Bot

There is info on the various bot commands, what they do, and who can use them. For example, I had a case where I put NONE in the release note section of the PR form, instead of ```NONE``` (in a code block), and the bot had labeled my PR as needing a release note. After I corrected it, the bot removed the incorrect label and added the correct one.

No CNCF-CLA label?

I had one commit where the k8s-ci-robot didn’t add the cncf-cla label like it should have (normally you’d see a “cncf-cla: yes” label on the PR).

In this commit, I had forgotten to add the “fixed #12345” in the first commit comment to indicate the issue fixed, but from what I hear, this should not affect the labelling, which should use the author and email address. I checked against other commits, and the info was the same.

I’m not sure why the bot didn’t add the label (even with a “no” instead of “yes”). Erick Fejta tried closing and reopening the PR, but that didn’t seem to work.

I did a rebase with Kubernetes master and then pushed up the code again, and this time the bot added the “cncf-cla: yes” label. One thing I had done differently was that I was using a new VM for local development, and in that VM I had set up the remote for my kubernetes repo using the https: protocol. I had, however, set up GitHub to use SSH keys (I forgot about that when setting up the VM).

I changed the remote to use the SSH protocol, like this:

git remote set-url origin git@github.com:pmichali/kubernetes.git

I then re-pushed the change to my github on the same branch (with the -f option to force). The PR got the new commit, and the bot added the label!
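
If you want to check which protocol a clone is using (this is standard git, nothing Kubernetes-specific), you can list the remotes:

git remote -v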

 

Label Me Purple

There is a way to add labels to an issue. I saw a contributor use a bot comment to do this (it can be done by anyone), and it added a label to the issue. There is also a “/area” bot command, which also creates a label. You can see the available labels.
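
As a sketch, a comment like the following (the area name is just an example) would get the corresponding area/kubectl label added to the issue:

/area kubectl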

 

Can I Kick Off Testing?

Well, it depends… If you are a Kubernetes member, you can add an “@k8s-bot ok to test” comment. As a newbie, you won’t be a member, and will need to wait for a member to issue this command to enable testing of the pull request.

Granted, if one of your tests fails, you can use the directions in the bot comment to resubmit that specific test. For example, I had a test failure reported, and I just cut and pasted the mentioned “@k8s-bot pull-kubernetes-federation-e2e-gce test this” to re-run that test, once the failure (a flake) was fixed.

There is also a “/retest” comment that can be entered to have all the tests re-run.

See my blog entry on test infra tools for info on looking at test results, checking on the health of test jobs, etc.

 

What If My Code Is Not Ready For Review?

If you want to create a PR of some work-in-progress code so people can see your changes, but it is not ready for review, you can add the prefix “WIP” to the subject.

Looks Good To Me

Once the reviewer is happy with your changes, they’ll mark the PR as lgtm.

If an approver is not already assigned, you or the reviewer will want to assign one (you can refer to the OWNERS list).

Once they approve, you’ll see a bunch of bot activity to invoke the tests and complete the process, merging in the PR (assuming all goes well).

It’ll also provide a button to allow you to remove your branch, as it is no longer needed.

 

What’s Reviewable?

I noticed that some people go to the “Files changed” tab and then add comments to the code. Others click on the Reviewable button on the page.

This takes you to a page where you can review the code, add comments, acknowledge changes, indicate done, see which files have been reviewed and by whom, and see all the revisions. I’m still trying to figure out all the ins and outs of this tool, and will try to add notes here. TODO

Note, however, not all PRs will have the “reviewable” button (I’m not sure it is enabled all the time – maybe its use is in beta?).

At the bottom of the reviewable.kubernetes.io page for my commit there is a set of links, the last of which indicates to open an issue with the Kubernetes repo to connect it to Reviewable. I’m not sure if I should make that request. Maybe someone can comment.

 
