Error on Screenshot, Location and Quality #43
Hmm, let me look into this.

**URL Scan API:** This isn't actually being used. I can push an update that will use it as a fallback, but I took it out a while ago because their API's pricing was quite steep (for the hosted instance; it would be fine if you were self-hosting).

**Quality Error:** The Quality job uses Lighthouse + the Page Speed Insights API. This is free, but needs to be enabled from the Cloud console.

**Server Location Error:** This task uses IP API, which is free and doesn't require any auth. You can send a GET request to it directly to check the response. I've not actually seen this job fail before, so I'd be interested to learn more.

**Screenshot Error:** This works by spinning up a headless instance of Chromium locally and using Puppeteer to control it. It looks like it couldn't find the Chromium executable in your VM/system. If you've got it installed in a non-standard location, you can try setting the path explicitly.
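As a rough sketch of how the Screenshot job's launch could honour a custom Chromium path (the helper name `getLaunchOptions` is my own, not the project's actual code; `CHROME_PATH` is the variable the project's Dockerfile sets):

```javascript
// Build Puppeteer launch options that honour an explicit Chromium binary path.
// Hypothetical helper: the real code in the project may be structured differently.
function getLaunchOptions(env = process.env) {
  const options = { headless: true };
  if (env.CHROME_PATH) {
    // Point Puppeteer at a non-standard Chromium install, e.g. /usr/bin/chromium
    options.executablePath = env.CHROME_PATH;
  }
  return options;
}

// Usage (requires the real `puppeteer` package):
// const puppeteer = require('puppeteer');
// const browser = await puppeteer.launch(getLaunchOptions());
```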
> Server Location Error: I've not actually seen this job fail before, so would be interested to learn more.

Thanks, this one is confirmed to be working.

Quality Error

For the Screenshot, my compose file begins:

```yaml
version: "3"
services:
```

I can see from the console that Chromium seems to be there, but I'm not sure why it's failing for me. Does this mean I need to add a Chromium part when I deploy this on the stack? Thanks again.
Hmmm, I'll need to look into this. Chromium should be installed in the Dockerfile. I thought it was maybe a permissions thing, but that doesn't seem to be the case. If anyone has more insight as to why the jobs that use Puppeteer cannot find Chromium when running in Docker, that'd be helpful :)
@Lissy93 While Chromium is installed it does not allow for a sandbox within Docker.
Just added PR #51; however, it probably needs a check to only set the flag when run inside Docker.
Change the Dockerfile:

```dockerfile
# Build argument: the Node.js version to use
ARG NODE_VERSION=16
# Build argument: the Debian version to use, defaulting to "bullseye"
ARG DEBIAN_VERSION=bullseye

# Use the official Node.js Docker image, pinned to the chosen Node.js and Debian versions
FROM docker.io/library/node:${NODE_VERSION}-${DEBIAN_VERSION}

# Set the container's default shell to Bash with strict options enabled
SHELL ["/bin/bash", "-euo", "pipefail", "-c"]

# Install the Chromium browser
RUN apt-get update -qq && \
    # Download and verify Google's signing key
    wget --quiet --output-document=- https://dl-ssl.google.com/linux/linux_signing_key.pub | gpg --dearmor > /etc/apt/trusted.gpg.d/google-archive.gpg && \
    # Add the Google Chrome repository to the apt source list
    sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' && \
    # Install Chromium via apt-get, then clean the apt cache
    apt-get -qqy --no-install-recommends install chromium traceroute && \
    rm -rf /var/lib/apt/lists/*

# Record Chromium's version into /etc/chromium-version for reference
RUN /usr/bin/chromium --no-sandbox --version > /etc/chromium-version

# Set the working directory to /app
WORKDIR /app

# Copy package.json and yarn.lock into the working directory
COPY package.json yarn.lock ./

# Install dependencies and clear the Yarn cache
RUN yarn install && yarn cache clean

# Copy all files into the working directory
COPY . .

# Build the application
RUN yarn build

# Expose the container port (default 3002, configurable via the PORT environment variable)
EXPOSE ${PORT:-3002}

# Set CHROME_PATH so the app can find the Chromium binary
ENV CHROME_PATH='/usr/bin/chromium'

# On container start, launch the Node.js app's server.js
CMD [ "node", "server.js" ]
```
Using axios for the request in quality.js causes problems; switching to `https` will solve it:

```javascript
// Handler: fetch the performance analysis data for the given page
const response = await getGooglePageSpeedInsights(url);
return response; // return the fetched performance data
// Export the middleware handler
// Export the handler so other modules can use it directly
```
Updates the Dockerfile with changes suggested by @GWnbsp in #43 (comment)

### Summary of Changes

1. **ARG Statements:** Introduced `ARG` statements for Node.js and Debian versions, making the Dockerfile more customizable.
2. **SHELL Command:** Changed the default shell to Bash with certain options enabled, improving robustness.
3. **Chromium Installation:** Updated Chromium installation to use Google's signing keys and repositories, aiming for more secure and up-to-date packages.
4. **Chromium Version:** Added a step to save Chromium's version into `/etc/chromium-version` for reference.
5. **Directory Creation:** Added a new directory `/app/data` in the container's filesystem.
6. **CMD Change:** Changed the CMD to start the Node.js server (`server.js`) instead of using `yarn serve`.
7. **General Cleanup and Comments:** Code has been refactored for better readability, and detailed comments have been added for clarity.
8. **Dependency Installation:** Kept `yarn install` and the removal of the Yarn cache, but the command is more streamlined.
9. **Other Minor Changes:**
   - Added flags like `-qq` and `--no-install-recommends` for quieter and optimized installation.
   - Enhanced cleanup with `rm -rf /var/lib/apt/lists/*`.
Thanks for the help on the Docker side. It was working, and I was able to get so much information on a site using this tool. It was awesome!

Today, I saw an added API entry for urlscan.io, which has "screenshot" capabilities.

So far, out of the box, I loaded all API keys except for GOOGLE_CLOUD_API_KEY, but I got this error on "screenshot":
Error Details for screenshot

The screenshot job failed with an error state after 16405 ms. The server responded with the following error:

```
Failed to launch the browser process!
[146:146:0827/000910.219594:ERROR:browser_main_loop.cc(536)] Failed to open an X11 connection.
[146:146:0827/000910.221268:ERROR:browser_main_loop.cc(1386)] Unable to open X display.

TROUBLESHOOTING: https://pptr.dev/troubleshooting
```
Also, I checked the urlscan.io API to see whether any quota had been used up; so far, I haven't seen anything consumed.

Another inquiry on using the urlscan.io API: does it run scans as public or private? It would be super awesome if the default behavior were a PRIVATE scan, to avoid posting the URL on the urlscan.io page itself.
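For reference, a private submission to urlscan.io's scan endpoint sets `visibility` in the request body; a minimal sketch (`buildUrlscanRequest` is a hypothetical helper, not the project's code):

```javascript
// Build an https.request() options object plus JSON body for a PRIVATE
// urlscan.io submission, so the scanned URL never appears on public pages.
// Hypothetical helper introduced for illustration.
function buildUrlscanRequest(url, apiKey) {
  return {
    hostname: 'urlscan.io',
    path: '/api/v1/scan/',
    method: 'POST',
    headers: {
      'API-Key': apiKey,
      'Content-Type': 'application/json',
    },
    // "private" keeps the result out of urlscan.io's public listings
    body: JSON.stringify({ url, visibility: 'private' }),
  };
}
```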
Error Details for location

The location job failed with an error state after 9 ms. The server responded with the following error:

```
Failed to fetch
```

Error Details for Quality

The quality job failed with an error state after 345 ms. The server responded with the following error:

```
No Data
```