
May be a bit confused and multiple questions. #203

Closed
ididitmyway opened this issue Jun 16, 2021 · 34 comments

Comments

@ididitmyway

Hi there,
I have a ScanCode.io server installed and running from docker-compose on an Ubuntu 20.04 LTS server.

I successfully scanned an open repository on GitHub, but I still have some questions left.

1. How can I scan a repository where authorisation is required?

2. How can I rescan a repository?

3. How can I automatically create a new scan pipeline for a new repository?

4. How can I trigger/retrigger the scan from a CI/CD system like Jenkins, TeamCity, or Azure DevOps?

5. How can I compare the results? / Can I get a report containing only the things that changed since the last scan?

6. How can I generate readable documentation for my customers (containing only the necessary information)?

Kind Regards
Ben

@tdruez
Member

tdruez commented Jun 16, 2021

Hi Ben

I successfully scanned an open repository on GitHub

Well done!

1. How can I scan a repository where authorisation is required?

What kind of authorization? We do not have specific support for auth yet but we are planning to implement it in the future. It'll be good to know a bit more about your specific requirements.
For now, you can download the file locally on your machine through your usual auth system and upload that file on project creation.

2. How can I rescan a repository?

Do you mean rescanning the exact same code, or rather scanning another version of that code?
You can create a new Project if you want to keep the results from the previous scan.

3. How can I automatically create a new scan pipeline for a new repository?

You can use the REST API to automate your project/pipeline management; see https://scancodeio.readthedocs.io/en/latest/scanpipe-api.html (we definitely need to improve that part of the documentation).
Everything available through the web UI is also available in the API.

4. How can I trigger/retrigger the scan from a CI/CD system like Jenkins, TeamCity, or Azure DevOps?

Same as above, the API can be used for such automation.
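For illustration only, a minimal sketch of what a CI/CD job (Jenkins, TeamCity, Azure DevOps, ...) could call; the host, project name, and artifact URL are placeholders, and the endpoint is the same project-creation API shown further down this thread:

# Hypothetical CI step: create a project for a build artifact and run a pipeline.
curl -X POST http://your-scancode-host/api/projects/ \
    -H "Content-Type: application/json" \
    --data '{"name": "my-repo-build-42", "input_urls": "https://ci.example.com/artifacts/my-repo.tar.gz", "pipeline": "scan_codebase", "execute_now": true}'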

5. How can I compare the results? / Can I get a report containing only the things that changed since the last scan?

You can only download full reports (JSON, XLSX) at the moment. Could you open a new issue in this repository explaining your needs in a bit more detail? We can discuss implementation there. See also our tool DeltaCode, https://github.com/nexB/deltacode, which we use to compare scans.

6. How can I generate readable documentation for my customers (containing only the necessary information)?

Could you provide an example of the content you'd like to generate?

@ididitmyway
Author

What kind of authorization? We do not have specific support for auth yet but we are planning to implement it in the future. It'll be good to know a bit more about your specific requirements.
For now, you can download the file locally on your machine through your usual auth system and upload that file on project creation.

We have several repositories, and we want to scan them automatically and check whether a new license was added within a period of two weeks.
We also want to scan new repositories automatically.
We use Azure DevOps (by the way, it is free for open source, you can use it for CI/CD).

Here are some open source projects that use Azure DevOps as their CI/CD pipeline:

	Atom 			https://dev.azure.com/github/Atom/_build?definitionId=1
	CPython 		https://dev.azure.com/python/cpython/_build?definitionId=9
	Pipenv 			https://dev.azure.com/pypa/pipenv/_build?definitionId=12
	Tox 			https://dev.azure.com/toxdev/tox/_build?definitionId=9
	Visual Studio Code 	https://dev.azure.com/vscode/VSCode/_build?definitionId=1
	TypeScript 		https://dev.azure.com/typescript/TypeScript/_build?definitionId=14

5. How can I compare the results? / Can I get a report containing only the things that changed since the last scan?

You can only download full reports (JSON, XLSX) at the moment. Could you open a new issue in this repository explaining your needs in a bit more detail? We can discuss implementation there. See also our tool DeltaCode, https://github.com/nexB/deltacode, which we use to compare scans.

Thanks, I will take a look at this tool.

Anything else:

Thanks for your help and your questions. We decided to use the docker container (scancode-toolkit) instead of the ScanCode.io server.
We will scan each repository and check the scan results into another repository to compare them.

I created some shell scripts to do so; here they are:

Clone Repos

#!/bin/bash
# This script prompts for a username, then clones all repos listed in repos.txt.
# git asks for the password/credentials on each clone.
echo "start cloning"
server="<your git host>"
project="<your project>"

read -p "Git username: " username

while read -r p;
do
  echo "git clone ${server}/${project}/_git/${p} user -> ${username}"
  mkdir -p "${p}"
  git clone "${server}/${project}/_git/${p}" "${p}"
done <repos.txt

Scan Repos

#!/bin/bash
# This script starts a docker container to scan all repos listed in repos.txt.
while read -r p;
do
  echo "start scan ${p}"
  sudo docker run -v "$PWD/$p/":/mytemp -l repo1 scancode-toolkit:21.3.31 \
    -clpeui -n 1 --license-text \
    --json-pp "/mytemp/myresult$p.json" \
    --csv "/mytemp/myresult$p.csv" \
    --html "/mytemp/myresult$p.html" \
    /mytemp
done <repos.txt

@ddmesh

ddmesh commented Jul 21, 2021

Hi tdruez,

You mentioned https://scancodeio.readthedocs.io/en/latest/scanpipe-api.html. I'm really having problems getting anything to work.
After starting ScanCode.io with docker-compose I can access http://localhost/api/projects, but I always get
"Authentication credentials were not provided."

I then found the URL http://localhost/admin/. After more trial and error I figured out that I can create a superuser account with
./manage.py createsuperuser
From there I could log in to the admin page and create users and tokens. But when I specify users with curl or the http tool, I still get the same error.

Unfortunately I could not find any documentation about the usage of the REST API or the defined URLs. When searching with Google, it seems that no one is using the REST API of ScanCode.io at all.

I then searched the source code and found some url-files defining urlpatterns.

./scancodeio/urls.py
api_router = DefaultRouter()
api_router.register(r"scans", ScanViewSet)
api_router.register(r"projects", ProjectViewSet)
api_router.register(r"runs", RunViewSet)

urlpatterns = [
    path("admin/", admin.site.urls),
    path("api/", include(api_router.urls)),
    path("license/", include(licenses.urls)),
    path("", include("scanpipe.urls")),
    path("", RedirectView.as_view(url="project/")),
]

By trying some combinations I got the following:

I then found another urlpattern file: ./scanpipe/urls.py

urlpatterns = [
    path(
        "project/<uuid>/resources/<path>/raw/",
        views.CodebaseResourceRawView.as_view(),
        name="resource_raw",
    ),
    path(
        "project/<uuid>/resources/<path>/",
        views.CodebaseResourceDetailsView.as_view(),
        name="resource_detail",
    ),
 ....

I tried URLs like http://localhost/api/project/... but any request tells me that the URL does not exist.
I also could not figure out with what data I should replace the placeholders.

I then looked at https://www.django-rest-framework.org/tutorial/4-authentication-and-permissions/#authenticating-with-the-api
to get any hint about how to use authentication and pass a token or user credentials. But ScanCode.io always says "Authentication credentials were not provided."

Questions:

  • How are credentials/tokens passed to access the API? (I have added all permissions to all created users in http://localhost/admin/)
  • What is the complete URL format for all supported commands of the REST API?
  • Are there specific HTTP headers that must be provided?
  • Which commands should use GET/POST requests?

Can someone please provide some documentation and examples? Please!
I do not know how to interpret the little information at https://scancodeio.readthedocs.io/en/latest/scanpipe-api.html.

Thanks a lot.
Stephan

@tdruez
Member

tdruez commented Jul 21, 2021

Hi @ddmesh, I've updated the default settings so that the authentication system is not enabled in the API.
You should be able to access http://localhost/api/ without the credentials using the latest code from the main branch.

Create a new project using the REST API

  • Using cURL:
curl -X POST  http://localhost/api/projects/ \
    -H "Content-Type: application/json" \
    --data '{"name": "project_name", "input_urls": "https://download.url/package.archive", "pipeline": "scan_package", "execute_now": true}'
  • Using Python and the requests library:
import requests

api_url = 'http://localhost/api/projects/'
headers = {
    'Accept': 'application/json; indent=4',
}
data = {
    "name": "project_name1",
    "input_urls": "https://download.url/package.archive",
    "pipeline": "scan_package",
    "execute_now": True,
}
response = requests.post(api_url, headers=headers, data=data)
print(response.json())

@ddmesh

ddmesh commented Jul 21, 2021

Hi,

Thanks. I didn't know that I had to pass the parameters as JSON data.

About authentication: I got a little further. The token created via http://localhost/admin/ could be used on the command line as follows:
http --debug GET http://localhost/api/projects/ 'Authorization:Token 4dc5aed25414bedbce603dea5b5e0ac33d474c25'
or
curl -X GET http://localhost/api/projects/ -H 'Authorization:Token 4dc5aed25414bedbce603dea5b5e0ac33d474c25' | jq

Can you tell me where I can enable the authentication again? I intend to use it later in an environment where I need authentication.

Can you also give me the commands/format (JSON data) that I need to use to run each API command separately?
How can I download the output json files for a project or request the current scan status (if still busy)?

How do I upload a project.tgz instead of providing an input URL?

Thanks a lot
Stephan

@tdruez
Member

tdruez commented Jul 21, 2021

@ddmesh

Can you tell me where I can enable the authentication again? I intend to use it later in an environment where I need authentication.

Sure, see ebd9fe3
You can set "DEFAULT_PERMISSION_CLASSES": ("rest_framework.permissions.IsAuthenticated",),

Can you also give me the commands/format (JSON data) that I need to use to run each API command separately?

We need to raise the priority on improving the API docs, but in the meantime, you can check the following:
From the browsable API view at /api/projects/, click on the OPTIONS button and you'll get details about each "action". You can also browse the project list and project details to get an idea of the available fields.

When creating a project the response will provide a url value in the returned data. You can make a GET request on this URL to get all the information you need about a project, including the status of the pipeline run:

{
    "name":"project_name",
    "url":"/api/projects/b0c92a72-6c01-461d-993a-5360c30d7937/",
    "uuid":"b0c92a72-6c01-461d-993a-5360c30d7937",
    "created_date":"2021-07-21T16:06:29.132795+02:00"
    [...]
}
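For illustration, a hedged sketch of polling that project detail URL with curl (the UUID is the example value from the response above):

# Fetch the project details, including the pipeline runs, as JSON.
curl -s http://localhost/api/projects/b0c92a72-6c01-461d-993a-5360c30d7937/ \
    -H "Accept: application/json; indent=4"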

How can I download the output json files for a project or request the current scan status (if still busy)?

Use the results action: /api/projects/b0c92a72-6c01-461d-993a-5360c30d7937/results/
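For example, a hedged sketch of downloading the full JSON results to a local file:

# Download the full JSON results of the project to results.json.
curl -s http://localhost/api/projects/b0c92a72-6c01-461d-993a-5360c30d7937/results/ \
    -o results.json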


How do I upload a project.tgz instead of providing an input URL?

curl -X POST  http://localhost/api/projects/ \
    -F name=project_name \
    -F upload_file=@file.ext \
    -F pipeline=scan_codebase \
    -F execute_now=true
import requests

api_url = 'http://localhost/api/projects/'
headers = {
    'Accept': 'application/json; indent=4',
}
data = {
    "name": "project_na21me1",
    "pipeline": "scan_package",
    "execute_now": True,
}
with open('your_file_location.ext', 'rb') as f:
    response = requests.post(api_url, headers=headers, data=data, files={"upload_file": f})

print(response.json())

@ddmesh

ddmesh commented Jul 22, 2021

Hi and thanks.

My next steps were:

- git pull
- docker-compose build
- docker-compose up

Because you have disabled the authentication, I now see the "OPTIONS" button and the "Extra Actions" drop-down.
Before this, I couldn't pass the token via the browser, only via command-line tools like curl.

Via http://localhost/api/projects/b0c92a72-6c01-461d-993a-5360c30d7937/ I get the selected project and can now try other commands/URLs by playing with the "Extra Actions" and buttons.

About DEFAULT_PERMISSION_CLASSES: should I revert this setting, or is there a "user config" file that I have to use
to override this setting when I need to enable authorization again?

Thanks a lot

@tdruez
Member

tdruez commented Jul 22, 2021

About DEFAULT_PERMISSION_CLASSES: should I revert this setting, or is there a "user config" file that I have to use
to override this setting when I need to enable authorization again?

With this last change @ 97779b0 you can now easily enable the Authentication by adding the following line in your local .env file:

REST_FRAMEWORK_DEFAULT_PERMISSION_CLASSES=rest_framework.permissions.IsAuthenticated

Thanks for all your feedback, I'll move all this content into the API documentation.

@ddmesh

ddmesh commented Jul 22, 2021

super, thanks a lot

@ddmesh

ddmesh commented Jul 22, 2021

Hi,

When I create a project via the REST API but do not specify input_urls or an upload, and keep execute_now: false, the project is created.

How can I then add, for instance, two pipelines via the REST API, and how do I later start executing them?

Can you provide me with those two curl examples?
Thanks a lot.
Stephan

@tdruez
Member

tdruez commented Jul 22, 2021

How can I then add, for instance, two pipelines via the REST API, and how do I later start executing them?

api_url="http://localhost/api/projects/9cdaa9be-e48c-4265-b94e-5457784df60f/add_pipeline/"
headers="Content-Type: application/json"

curl -X POST "$api_url" -H "$headers" --data '{"pipeline": "scan_package", "execute_now": false}'

curl -X POST "$api_url" -H "$headers" --data '{"pipeline": "a_pipeline", "execute_now": true}'

To start an existing pipeline pre-added to a project, find its API URL in the "runs": [ entry of the project details, and POST to its start_pipeline action:

api_url="http://localhost/api/runs/6ded4e0e-cf65-40c4-883c-3cd038262327/start_pipeline/"
headers="Content-Type: application/json"

curl -X POST "$api_url" -H "$headers"

@ddmesh

ddmesh commented Jul 22, 2021

Thanks, this sounds good, thanks also for the fast immediate support. :-)

@ddmesh

ddmesh commented Jul 22, 2021

When starting a pipeline with curl, you should use GET instead of POST (POST is not supported, but GET is working).

@tdruez
Member

tdruez commented Jul 22, 2021

When starting a pipeline with curl, you should use GET instead of POST (POST is not supported, but GET is working).

You are right, this action is only enabled for GET; we may want to change this to a POST.

@ddmesh

ddmesh commented Jul 23, 2021

Is there a way to upload inputs if a project already exists?

curl -X POST  http://localhost/api/projects/ \
    -F name=project_name \
    -F upload_file=@file.ext \
    -F pipeline=scan_codebase \
    -F execute_now=true

This call creates a project and if it already exists, I get an error.
I'm looking for a call like
curl -X POST http://localhost/api/projects/77ec1a1d-bcf0-41c0-b005-d7a23e5863fa/add_inputs/ -H 'Content-Type: application/json' -H "Content-Type: application/octet-stream" --data-binary "@/tmp/scancode/project.tgz"

  • What is the "http://localhost/api/scans/" API for? Why should I use it or not?
  • When calling curl -X GET http://localhost/api/projects/b957dc58-02f3-4e86-a1a6-943732046981/errors/ -H 'Content-Type: application/json', what does an error tell me? Is this some internal state or the complete "success status" of the project? At the moment it returns an empty JSON array.
  • When calling curl -X GET http://localhost/api/projects/b957dc58-02f3-4e86-a1a6-943732046981/summary/ -H 'Content-Type: application/json'
    I get {"error":"Summary file not available"}.
    How do I enable/configure a project to give me an overall project result (including all pipeline runs)?
    Does the summary list all unique license/author/copyright info (the number of each finding) and also some list for compliance_alert?

My intention is to scan a project and detect license conflicts. I assume that I can provide the policies.yml file which controls the outcome for compliance_alert, right?

@tdruez
Member

tdruez commented Jul 23, 2021

Is there a way to upload inputs if a project already exists?

Not yet through the REST API, we need to add a new action for this.

What is the "http://localhost/api/scans/" API for? Why should I use it or not?

It's the legacy scanning system. It's replaced by the "scan_package" pipeline.

When calling curl -X GET http://localhost/api/projects/b957dc58-02f3-4e86-a1a6-943732046981/errors/ -H 'Content-Type: application/json', what does an error tell me? Is this some internal state or the complete "success status" of the project? At the moment it returns an empty JSON array.

Any error that occurs during a pipeline run can be logged in the Errors model. You can look into the message field to get more details about each error. For example:

"message": "ERROR: for scanner: licenses:\nERROR: Processing interrupted: timeout after 120 seconds.",

Is this some internal state or the complete "success status" of the project? At the moment it returns an empty JSON array.

There's no such state at the Project level, but rather at the Pipeline run level. The run of a pipeline can have one of the following statuses:

class Status(models.TextChoices):
    NOT_STARTED = "not_started"
    QUEUED = "queued"
    STARTED = "started"
    RUNNING = "running"
    SUCCESS = "success"
    FAILURE = "failure"

Note that the errors are not directly tied to the status of a pipeline run but are instead "events" that were logged during the run. For example, a file too large to be scanned is an error, but the scanning pipeline can still be successful overall.

The run status field is available in the UI but is missing in the API; this is a mistake and I'll add it.

The run status field was added in the API in b8eeaaf.
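For illustration, a hedged sketch of reading that status once it is exposed, using the run UUID from the start_pipeline example above and jq to extract the field:

# Fetch a single run and print its status (not_started, queued, running, success, ...).
curl -s http://localhost/api/runs/6ded4e0e-cf65-40c4-883c-3cd038262327/ \
    -H "Accept: application/json" | jq -r '.status'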


At the moment it returns an empty JSON array.

There are no errors then :)

When calling curl -X GET http://localhost/api/projects/b957dc58-02f3-4e86-a1a6-943732046981/summary/ -H 'Content-Type: application/json' I get {"error":"Summary file not available"}.

You want to get the results action of your project to get the overall results. This includes the full details about all pipeline runs. You can then get the data you care about from that results content.

My intention is to scan a project and detect license conflicts. I assume that I can provide the policies.yml file which controls the outcome for compliance_alert, right?

That's right, see https://scancodeio.readthedocs.io/en/latest/scancodeio-settings.html#scancodeio-policies-file for the policies setup.
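For illustration, a hedged sketch of a minimal policies.yml following the format described in the linked settings documentation; the license keys and labels are only examples:

# Write a minimal policies file; compliance_alert is typically '', 'warning', or 'error'.
cat > policies.yml <<'EOF'
license_policies:
  - license_key: mit
    label: Approved License
    compliance_alert: ''
  - license_key: gpl-3.0
    label: Prohibited License
    compliance_alert: error
EOF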

@ddmesh

ddmesh commented Jul 23, 2021

Hi again, and thanks again for your immediate help.

So, to get an overall status I just need to request the results and traverse through the runs. I understand.

One more question: what is the following for?
curl -X GET http://localhost/api/projects/b957dc58-02f3-4e86-a1a6-943732046981/resources/
It seems to contain very similar information to the "results" output. Is this obsolete? Or what can I do with it that I can't do with
the results?

@ddmesh

ddmesh commented Jul 23, 2021

When I build the project with docker-compose build I get the following message. Could it be important?

Processing /opt/scancodeio
  DEPRECATION: A future pip version will change local packages to be built in-place without first copying to a temporary directory. We recommend you use --use-feature=in-tree-build to test your packages with this new behavior before it becomes the default.
   pip 21.3 will remove support for this functionality. You can find discussion regarding this at https://github.com/pypa/pip/issues/7555.

@tdruez
Member

tdruez commented Jul 23, 2021

One more question: what is the following for?
curl -X GET http://localhost/api/projects/b957dc58-02f3-4e86-a1a6-943732046981/resources/
It seems to contain very similar information to the "results" output. Is this obsolete? Or what can I do with it that I can't do with
the results?

Each model has its own endpoint: run, resource, package, error.
The results are a collection of all those objects for a given project.

@tdruez
Member

tdruez commented Jul 23, 2021

When I build the project with docker-compose build I get the following message. Could it be important?

It's not a ScanCode.io warning but rather a pip thing. It is not important, and future ScanCode.io versions will include a newer version of pip.

@ddmesh

ddmesh commented Jul 23, 2021

Each model has its own endpoint: run, resource, package, error.
The results are a collection of all those objects for a given project.

What do you mean by "model"? Different pipeline types?
When I compare the JSON from "resources" against "results", I see that "results" contains a "file" section that seems to be the same.

@tdruez
Member

tdruez commented Jul 23, 2021

What do you mean by "model"?

https://scancodeio.readthedocs.io/en/latest/scanpipe-concepts.html

@ddmesh

ddmesh commented Jul 23, 2021

Ah, I see. Depending on what I scan (directories/files from a codebase, or packages), a different database model is used.

I have just discovered that when I request "resources" for a project, I have to add "format=json". But this only gives me
the first ten entries; I have to pass "page=xxx" in the URL to see the other pages of this output.
So I think I do not need "resources", because everything I need can be found in "results", right?

@tdruez
Member

tdruez commented Jul 23, 2021

So I think I do not need "resources", because everything I need can be found in "results", right?

Right, everything is in the results; the other endpoints are used for specific lookups, for example retrieving the content of a given file to be displayed in the UI.
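For illustration, a hedged sketch of the pagination mentioned above on the resources endpoint (same example project UUID; format and page are the standard query parameters):

# Fetch the second page of codebase resources as JSON.
curl -s "http://localhost/api/projects/b957dc58-02f3-4e86-a1a6-943732046981/resources/?format=json&page=2"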

@ddmesh

ddmesh commented Jul 27, 2021

Hi,

I currently get a "UTF-8" error when I specify the Content-Type for the following command (create project + upload).
The project is not created.

curl -X POST  http://localhost/api/projects/ \
     -F name="curl-$(date +'%d-%m-%Y_%H-%M-%S')" \
     -F upload_file=@/tmp/project.tgz \
     -F pipeline=scan_codebase \
     -F execute_now=true \
     -H 'Authorization:Token 5b508286d845f0180a5b0d3cc4e38e8a938e1c2e' \
     -H 'Content-Type: application/json'

{"detail":"JSON parse error - 'utf-8' codec can't decode byte 0x8b in position 299: invalid start byte"}

(I have enabled the authorization for testing.)
When I remove the Content-Type header, the project is created and I get the correct JSON output from this command.
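Most likely the forced Content-Type: application/json makes the server try to JSON-decode the multipart body of the upload (0x8b is one of the gzip magic bytes), so the fix is simply to drop that header and let curl set the multipart/form-data Content-Type itself. A hedged sketch of the working variant:

# File upload: let curl set the multipart/form-data Content-Type automatically.
curl -X POST http://localhost/api/projects/ \
     -F name="curl-$(date +'%d-%m-%Y_%H-%M-%S')" \
     -F upload_file=@/tmp/project.tgz \
     -F pipeline=scan_codebase \
     -F execute_now=true \
     -H 'Authorization:Token 5b508286d845f0180a5b0d3cc4e38e8a938e1c2e'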

tdruez added a commit that referenced this issue Aug 6, 2021
tdruez added a commit that referenced this issue Aug 6, 2021: …pter #203
tdruez added a commit that referenced this issue Aug 9, 2021:
* Add content to the REST API documentation chapter #203
* Add details and example for the REST API actions in documentation chapter #203
* Add minor updates to the REST API section

@tdruez
Member

tdruez commented Sep 14, 2021

Is there a way to upload inputs if a project already exists?

@ddmesh a new action was added in the API to add inputs to existing projects in #318.

Refer to https://scancodeio.readthedocs.io/en/latest/rest-api.html#add-input for documentation.

@Atharex

Atharex commented Oct 3, 2021

Kind of returning to the original poster's 1st question... about pulling images from private repositories where you need authentication.

We are running our own private Harbor registry, which is already doing a great job with vulnerability scanning, but I wanted to also incorporate your solution for continuous license scanning of our docker containers. (For now, just a cron workflow that could query an image from our Harbor instance directly from ScanCode. Most likely, each time it would download the latest tag of an image and run license scanning on top of it in an existing project for that image.)

As you are using skopeo as the backend for querying a remote docker registry, would it not be possible to enable passing username & password credentials for it as well?

Now I can of course create an external workflow to save our images as tarballs and upload them separately to ScanCode, but it would be so much better if this step could be omitted and ScanCode would download them directly.

@tdruez
Member

tdruez commented Oct 4, 2021

Hi @Atharex, here's an idea based on the skopeo credentials support: https://github.com/containers/skopeo#authenticating-to-a-registry.

You would define your credentials for a given registry domain in a new SKOPEO_CREDENTIALS setting, for example:

SKOPEO_CREDENTIALS = {
    "myregistrydomain.com": "user:password",
}

At fetch time, if the domain of the docker:// reference URL is defined in SKOPEO_CREDENTIALS, the related credentials will be provided using $ skopeo copy --src-creds=.

This would allow supporting multiple registry sources and would not impact the current anonymous fetching.
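For illustration, a hedged sketch of the underlying skopeo call such a setting would drive; the registry, image, and credentials are placeholders:

# Copy an image from a private registry to a local docker-archive tarball,
# passing the source registry credentials explicitly.
skopeo copy --src-creds=user:password \
    docker://myregistrydomain.com/myproject/myimage:latest \
    docker-archive:/tmp/myimage.tar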

@Atharex @pombredanne Let me know your take on that approach.

@Atharex

Atharex commented Oct 4, 2021

@tdruez sounds good to me! +1

Though that limits ScanCode to only a single user per registry... For me that is not a problem, but it's just a note in case someone has a use case for that.

@tdruez
Member

tdruez commented Oct 4, 2021

Though that limits ScanCode to only a single user per registry... For me that is not a problem, but it's just a note in case someone has a use case for that.

At the moment ScanCode.io is single user anyway. Authentication will be implemented soon, and we'll be able to move such settings to the user level then.

Now, after discussion with @pombredanne, we're thinking about using the authentication file, instead of the custom code suggested above, on the ScanCode.io side.
See https://github.com/containers/skopeo/blob/main/docs/skopeo-copy.1.md#options for --authfile path and REGISTRY_AUTH_FILE.

@Atharex Any input on providing the location of your authfile in the ScanCode.io env?
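For illustration, a hedged sketch of how such an auth file is typically created and reused with skopeo; the file path and registry are placeholders:

# Log in once to create/update the auth file for the registry.
skopeo login --authfile /path/to/auth.json myregistrydomain.com

# Later fetches can reuse it explicitly...
skopeo copy --authfile /path/to/auth.json \
    docker://myregistrydomain.com/myproject/myimage:latest \
    docker-archive:/tmp/myimage.tar

# ...or via the environment variable that skopeo honours.
export REGISTRY_AUTH_FILE=/path/to/auth.json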

@Atharex

Atharex commented Oct 4, 2021

Well, considering I would like to eventually run ScanCode in Kubernetes, I would like to have it loaded as a secret separately.

Though I guess in the current docker-compose environment this is a bit more of a hassle... either extending the docker-compose setup to have an optional mount for this file, or having the file managed/created inside the app in a separate tab. I'm not really sure about either of those...

@tdruez
Member

tdruez commented Oct 4, 2021

We have to consider shared spaces in a Docker context, since other features, such as policies and custom pipelines, depend on external/user-provided files.

mjherzog changed the title from "May a bit confused and multiple questions." to "May be a bit confused and multiple questions." on Feb 26, 2022
@19csa55

19csa55 commented Feb 14, 2024

@Atharex Hello, have you implemented private registry authorization in your scancode.io instance?

tdruez added a commit that referenced this issue Mar 4, 2024
tdruez added a commit that referenced this issue Mar 4, 2024
tdruez added a commit that referenced this issue Mar 4, 2024
tdruez closed this as completed Mar 4, 2024