
About the harbor integration Clair problem #3305

Closed
Lynzabo opened this issue Sep 26, 2017 · 12 comments

Lynzabo commented Sep 26, 2017

My test environment is as follows:
Docker: 17.07.0-ce
Docker Compose: 1.16.1
Harbor: 1.2.0
Because I want to integrate Clair into Harbor, per the documentation I only modified the following items in harbor.cfg:

Hostname = 192.168.10.33
Clair_db_password = 123456

I start Harbor with the following command:

$ sudo ./install.sh --with-clair

I get the following output:

...
   Start complete. You can visit harbor now.

Checking the service startup status:

root@lclair2:~/harbor/harbor# docker-compose -f ./docker-compose.yml -f ./docker-compose.clair.yml ps
       Name                     Command               State                                Ports                              
------------------------------------------------------------------------------------------------------------------------------
clair                /clair2.0.1/clair -config  ...   Up      6060/tcp, 6061/tcp                                              
clair-db             /entrypoint.sh postgres          Up      5432/tcp                                                        
harbor-adminserver   /harbor/harbor_adminserver       Up                                                                      
harbor-db            docker-entrypoint.sh mysqld      Up      3306/tcp                                                        
harbor-jobservice    /harbor/harbor_jobservice        Up                                                                      
harbor-log           /bin/sh -c crond && rm -f  ...   Up      127.0.0.1:1514->514/tcp                                         
harbor-ui            /harbor/harbor_ui                Up                                                                      
nginx                nginx -g daemon off;             Up      0.0.0.0:443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:80->80/tcp
registry             /entrypoint.sh serve /etc/ ...   Up      0.0.0.0:15000->5000/tcp                                         
root@lclair2:~/harbor/harbor# 

But when I open the browser, Clair doesn't work properly.
[screenshots attached]

@reasonerjt
Contributor

This is a known limitation for users in China: Clair pulls vulnerability data from the internet, and if the connection is poor it can take a very long time before the data is ready.
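
One rough way to see how far the vulnerability import has progressed, assuming Clair's API port 6060 is published to the host (e.g. by editing docker-compose.clair.yml), is to list the namespaces Clair has loaded so far:

# sketch: list the namespaces Clair currently knows about; they get populated as the updaters run
curl -s http://192.168.10.33:6060/v1/namespaces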

Lynzabo commented Sep 27, 2017

It should not be a Clair problem. I manually modified the docker-compose.clair.yml file to expose Clair's port. After waiting a long time, Clair finally finished updating, and I can see in the browser:
[screenshots attached]

Then I used the analyze-local-images tool to test Clair directly, and it succeeded.
[screenshots attached]
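
For anyone reproducing this, the basic invocation looks roughly like the sketch below; the -endpoint flag and the <image> placeholder are assumptions from memory, so check the tool's help output:

# sketch: scan an image against the Clair API exposed on the Harbor host (flag names may differ; see -h)
analyze-local-images -endpoint http://192.168.10.33:6060 <image>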

But Harbor still reports the error.
[screenshots attached]

reasonerjt commented Sep 27, 2017

That is not correct; Clair can trigger a scan while the data is still incomplete.

But the warning will only disappear after all updaters finish successfully.
If you check Clair's log, you will see that some updaters have not finished.
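
A rough way to check this, reusing the compose files from the ps command above (the service name clair is an assumption based on the container name):

# sketch: look for updater activity and errors in Clair's log
docker-compose -f ./docker-compose.yml -f ./docker-compose.clair.yml logs clair | grep -i updater | tail -n 20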

Lynzabo commented Sep 27, 2017

Clair indeed has not finished updating completely, but that does not prevent vulnerability scanning; it only means the scan results are incomplete.

Now there is a new problem: when I push an official redis image, the error becomes the following:
[screenshot attached]
And the jobservice reported the following error:

harbor-jobservice | 2017-09-27T08:53:57Z [WARNING] Panic when handling job: {JobID: 13, JobType: Scan}, panic: runtime error: invalid memory address or nil pointer dereference, entering error state
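
To get more context around a panic like this, the combined service logs can be grepped for the panic line and its surroundings; a rough sketch, reusing the compose files from above:

# sketch: show the panic line plus a few lines of surrounding context from the service output
docker-compose -f ./docker-compose.yml -f ./docker-compose.clair.yml logs | grep -i -A 5 panic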

Lynzabo commented Sep 27, 2017

The problem is solved. Thank you!

Lynzabo closed this as completed Sep 27, 2017
@reasonerjt
Contributor

Did you trigger the scan after Clair finished initializing the CVE data?
Is it reproducible?
Is this image on Docker Hub?

Lynzabo commented Sep 28, 2017

This function basically works, but not all images can be scanned. I tested three images, all pulled directly from Docker Hub and simply retagged. All three can be scanned directly with the clairctl tool, but scanning them through Harbor gives poor results. The results are as follows:

First, 192.168.10.31/cloudm/redis:1.0.1, which scans successfully.

Second, 192.168.10.31/cloudm/nginx:1.11.5, whose error message is as follows:

2017-09-28T03:58:20Z [INFO] Entered scan initializer
2017-09-28T03:58:20Z [INFO] Image: cloudm/nginx:1.11.5, digest: sha256:dfcfdd7139d8ec531c1ffc9ab31ef0940428bca58e336ab38f67c176e31feb16
2017-09-28T03:58:20Z [INFO] Entered scan layer handler, current: 0, layer name: 3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112, layer path: http://registry:5000/v2/cloudm/nginx/blobs/
2017-09-28T03:58:20Z [ERROR] [handlers.go:119]: Unexpected error: Unexpected status code: 400, text: {"Error":{"Message":"could not find layer"}}
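
Note that the layer path in the log above ends at .../blobs/ with no digest appended, which matches the registry's "could not find layer" response. A rough way to check whether a given layer blob actually exists in the registry, with <digest> as a placeholder for one of the digests listed in the image's manifest, and adding authentication if Harbor requires it:

# sketch: HEAD a specific layer blob to confirm the registry can serve it (<digest> is a placeholder)
curl -sI http://192.168.10.33:15000/v2/cloudm/nginx/blobs/<digest>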

Third, 192.168.10.31/cloudm/photon:1.0, whose error message is as follows:

2017-09-28T02:24:05Z [INFO] Entered scan initializer
2017-09-28T02:24:05Z [INFO] Image: cloudm/photon:1.0, digest: sha256:50a8b9da098cf416bfb15183d19462158bfaab862d82decbf9657cfba177d250
2017-09-28T02:24:05Z [INFO] Entered scan layer handler, current: 0, layer name: d6a375c016b8d6876a3f5debd48f71e0a6c56715be0fe006eded75d0e7b49d76, layer path: http://registry:5000/v2/cloudm/photon/blobs/sha256:de257cbc428c5188e8b0e097b11b13a840b726d212499a0c85f0b3e081e395d1
2017-09-28T02:24:08Z [ERROR] [handlers.go:110]: Unexpected error: Unexpected status code: 422, text: {"Error":{"Message":"worker: OS and/or package manager are not supported"}}
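
The 422 here means Clair could not map the image to a namespace it supports, i.e. a known distro and package manager; at the time, Photon OS may simply not have been among Clair's supported distros. A rough way to see what the image itself declares, assuming it contains a shell and an /etc/os-release file:

# sketch: print the OS identification inside the image, to compare against Clair's supported namespaces
docker run --rm 192.168.10.31/cloudm/photon:1.0 cat /etc/os-release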

I printed Clair's own log while each client calls it, for this image:

192.168.10.31/cloudm/registry:2.3.0

clairctl:

clair          | {"Time":"2017-09-28 07:58:27.913313","elapsed time":868951,"method":"POST","remote addr":"192.168.10.34:43879","request uri":"/v1/layers","status":"201"}
clair          | {"Time":"2017-09-28 07:58:28.222453","elapsed time":739554,"method":"POST","remote addr":"192.168.10.34:43881","request uri":"/v1/layers","status":"201"}
clair          | {"Time":"2017-09-28 07:58:28.524699","elapsed time":1949218,"method":"POST","remote addr":"192.168.10.34:43883","request uri":"/v1/layers","status":"201"}
clair          | {"Time":"2017-09-28 07:58:28.830718","elapsed time":1223403,"method":"POST","remote addr":"192.168.10.34:43885","request uri":"/v1/layers","status":"201"}
clair          | {"Time":"2017-09-28 07:58:29.137830","elapsed time":1589017,"method":"POST","remote addr":"192.168.10.34:43887","request uri":"/v1/layers","status":"201"}
clair          | {"Time":"2017-09-28 07:58:29.443693","elapsed time":1832522,"method":"POST","remote addr":"192.168.10.34:43889","request uri":"/v1/layers","status":"201"}
clair          | {"Event":"Namespace unknown","Level":"warning","Location":"worker.go:211","Time":"2017-09-28 07:58:30.007971","feature name":"zlib","feature version":"1:1.2.8.dfsg-2","layer":"sha256:a79b4a92697e40ba4fc72102418aefa96c75a91c60bc58c85a354280854e570c"}
clair          | {"Time":"2017-09-28 07:58:30.008202","elapsed time":264523172,"method":"POST","remote addr":"192.168.10.34:43891","request uri":"/v1/layers","status":"422"}
clair          | {"Time":"2017-09-28 07:58:30.331650","elapsed time":2788273,"method":"POST","remote addr":"192.168.10.34:43893","request uri":"/v1/layers","status":"201"}
clair          | {"Time":"2017-09-28 07:58:30.636278","elapsed time":1647184,"method":"POST","remote addr":"192.168.10.34:43895","request uri":"/v1/layers","status":"201"}

clair          | {"Time":"2017-09-28 07:58:30.646617","elapsed time":9023350,"method":"GET","remote addr":"192.168.10.34:43896","request uri":"/v1/layers/sha256:fdd5d7827f33ef075f45262a0f74ac96ec8a5e687faeb40135319764963dcb42?vulnerabilities","status":"200"}
clair          | {"Time":"2017-09-28 07:58:30.654504","elapsed time":3705550,"method":"GET","remote addr":"192.168.10.34:43896","request uri":"/v1/layers/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4?vulnerabilities","status":"200"}
clair          | {"Time":"2017-09-28 07:58:30.655555","elapsed time":562959,"method":"GET","remote addr":"192.168.10.34:43896","request uri":"/v1/layers/sha256:a79b4a92697e40ba4fc72102418aefa96c75a91c60bc58c85a354280854e570c?vulnerabilities","status":"404"}
clair          | {"Time":"2017-09-28 07:58:30.660532","elapsed time":4301151,"method":"GET","remote addr":"192.168.10.34:43896","request uri":"/v1/layers/sha256:1881c09fc7347ec80cedfc0318ab1d24c6976fcc332f4cf226ebb1af357aae61?vulnerabilities","status":"200"}
clair          | {"Time":"2017-09-28 07:58:30.665711","elapsed time":4659839,"method":"GET","remote addr":"192.168.10.34:43896","request uri":"/v1/layers/sha256:0f24f5ab4e0371dacc3f87e15c4c2bebc22beb30288b5d38c20ea43af32ad9ae?vulnerabilities","status":"200"}
clair          | {"Time":"2017-09-28 07:58:30.671799","elapsed time":3940532,"method":"GET","remote addr":"192.168.10.34:43896","request uri":"/v1/layers/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4?vulnerabilities","status":"200"}
clair          | {"Time":"2017-09-28 07:58:30.675337","elapsed time":3165970,"method":"GET","remote addr":"192.168.10.34:43896","request uri":"/v1/layers/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4?vulnerabilities","status":"200"}
clair          | {"Time":"2017-09-28 07:58:30.686156","elapsed time":10093251,"method":"GET","remote addr":"192.168.10.34:43896","request uri":"/v1/layers/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4?vulnerabilities","status":"200"}
clair          | {"Time":"2017-09-28 07:58:30.691384","elapsed time":2897633,"method":"GET","remote addr":"192.168.10.34:43896","request uri":"/v1/layers/sha256:a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4?vulnerabilities","status":"200"}

Harbor:

clair          | {"Event":"could not download layer: expected 2XX","Level":"warning","Location":"driver.go:135","Time":"2017-09-28 08:08:50.702485","status code":404}
clair          | {"Event":"failed to extract data from path","Level":"error","Location":"worker.go:122","Time":"2017-09-28 08:08:50.702592","error":"could not find layer","layer":"3973e022e93220f9212c18d0d0c543ae7c309e46640da93a4a0314de999f5112","path":"http://registry:5000/v2/cloudm/registry/blobs/"}
clair          | {"Event":"Handled HTTP request","Level":"info","Location":"router.go:57","Time":"2017-09-28 08:08:50.702829","elapsed time":2129157,"method":"POST","remote addr":"172.20.0.6:56980","request uri":"/v1/layers","status":"400"}

From Clair's output you can see that the two clients submit a different number of layers; the problem should be there.
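
One rough way to compare what each client sees is to fetch the image manifest in both schema versions and count the layer entries. This is only a sketch: the registry is published on host port 15000 per the ps output above, jq is assumed to be installed, and depending on Harbor's auth settings you may need to add a Bearer token obtained from Harbor's token service.

# sketch: count layer entries in the schema1 vs schema2 manifests of the same image
curl -s -H "Accept: application/vnd.docker.distribution.manifest.v1+json" \
  http://192.168.10.33:15000/v2/cloudm/registry/manifests/2.3.0 | jq '.fsLayers | length'
curl -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  http://192.168.10.33:15000/v2/cloudm/registry/manifests/2.3.0 | jq '.layers | length'

Schema1 manifests list fsLayers (blobSum entries, including empty layers), while schema2 manifests list layers with digests, so clients parsing different schemas can easily disagree on the layer set.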

Lynzabo reopened this Sep 29, 2017

Lynzabo commented Sep 29, 2017

I found the answer: these test images use different manifest schema versions. I have modified the code, and now I need to test it.

reasonerjt commented Sep 30, 2017

@Lynzabo thanks for the update.

However, I'm confused by your comment: by mirror, do you mean image?

As for your third test, 192.168.10.31/cloudm/photon:1.0: you mentioned Harbor failed to scan it but clairctl succeeded. What is the output of clairctl?

What version of Clair are you using? I don't think the client is doing the right thing: if it can't determine the "namespace", i.e. the Linux distro of the image, how did it figure out the vulnerabilities? Where did the data come from?

You pasted 4 sections of Harbor's output but only one section of clairctl's output; is it for all the images?

Lynzabo commented Sep 30, 2017

Sorry, the third image does not work with those clients either. I modified the code, and the second image can now be scanned. #3347

lujay commented Aug 27, 2018

@Lynzabo Hi Lynzabo, thanks for your comment, but as I'm new to this, could you please tell me how you resolved it? Did you just improve your local network access? I met the same problem, and my environment is:
Docker version 18.06.0-ce, build 0ffa825
docker-compose version 1.22.0
Harbor 1.6.0
It would be much appreciated.


stale bot commented Nov 25, 2018

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the Stale label Nov 25, 2018
stale bot closed this as completed Dec 16, 2018