
Alternative download site for current Textadept files #489

Closed
oOosys opened this issue Nov 11, 2023 · 2 comments
oOosys commented Nov 11, 2023

I am currently on a slow (7 kByte/s) Internet connection and github.com fails to download any content larger than approximately 1.2 MByte. The same problem also causes git clone to fail at around 12% of the download and restart from zero every time.

Is there an alternative download site that supports resuming interrupted downloads, so that I can get the most current Textadept files?
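As a side note, any server that honors HTTP Range requests can resume a broken transfer. A minimal sketch in Python, assuming the server supports `Range` (the URL below is a placeholder, not a confirmed mirror):

```python
import os
import requests

def resume_headers(path):
	"""Build a Range header that asks the server to continue
	where a previous partial download left off."""
	offset = os.path.getsize(path) if os.path.exists(path) else 0
	return ({"Range": f"bytes={offset}-"} if offset else {}), offset

def resumable_download(url, path, chunk_size=64 * 1024):
	"""Append to `path`, resuming via HTTP Range if the server allows it."""
	headers, offset = resume_headers(path)
	with requests.get(url, headers=headers, stream=True, timeout=30) as r:
		r.raise_for_status()
		# 206 Partial Content means the server honored the Range request;
		# a plain 200 means it ignored it, so we must start over.
		mode = "ab" if r.status_code == 206 and offset else "wb"
		with open(path, mode) as f:
			for chunk in r.iter_content(chunk_size):
				f.write(chunk)
```

Whether this helps depends entirely on the server: GitHub's archive endpoints generate the zip on the fly and may not support ranges, which is exactly the problem described here.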


oOosys commented Nov 11, 2023

The issue with GitHub seems to be that, for an unknown reason, the Internet connection drops while downloading files larger than about 700 kByte to 1.2 MByte on a slow connection, so it is possible to get neither the master .zip file of the repository nor a release download.

A workaround that currently works for me is to use the GitHub API to download the repository one file at a time (assuming each file is smaller than 1.2 MByte), instead of fetching the entire content as a single archive, which is also what happens when using git clone:

import hashlib, os, requests

githubUser     = "orbitalquark"
githubUserRepo = "textadept"
userHOMEdir    = os.path.expanduser("~")

githubRepoContentURL = f"https://api.github.com/repos/{githubUser}/{githubUserRepo}/contents"
baseDownloadDir      = f"{userHOMEdir}/Downloads/githubUser_{githubUser}__Repo_{githubUserRepo}"
os.makedirs(baseDownloadDir, exist_ok=True)  # safer than shelling out to mkdir

def getAndSaveFilesListedInURLresponse(url): 
	print(f"	===###===###=== {url=}	{baseDownloadDir=} ")
	objURLresponse = requests.get(url)
	lstObjURLjson = objURLresponse.json()
	print(f" {objURLresponse=}	{lstObjURLjson=}")
	print("------------------------------------------")
	for dct in lstObjURLjson:
		print(f"	>>>IF>>> {dct['type']=}:	{dct=}")
		if dct["type"] == "dir":
			newURL = dct["_links"]["self"]
			newDownloadDir = baseDownloadDir+"/"+dct["path"]
			os.makedirs(newDownloadDir, exist_ok=True)  # safer than shelling out to mkdir
			print(f"	=== >>> {newURL=}	CREATED {newDownloadDir=} recursionCall() ")
			getAndSaveFilesListedInURLresponse(newURL)
			continue
		if dct["type"] == "file":
			downloadURL		= dct["download_url"]
			fileName 			= dct["name"]
			fullPathFileName	= baseDownloadDir+"/"+dct["path"]
			objURLresponse	= requests.get(downloadURL)
			print(f"	type==file size={dct['size']}: {objURLresponse.status_code=} {fileName=} {downloadURL=} {fullPathFileName=}")
			if objURLresponse.status_code == 200:
				#  Save the content to a file
				with open(fullPathFileName, 'wb') as fwb:
					fwb.write(objURLresponse.content)
				print("	got:	"+fullPathFileName)
				# github prepends the file content with b"blob {fileSize}\x00" before calculating sha1
				githubSHA1prefix = b"blob " + f"{len(objURLresponse.content)}".encode("utf-8") + b"\x00"
				print("		with sha= " + hashlib.sha1(githubSHA1prefix + objURLresponse.content).hexdigest())
				# print(response.content.decode('utf-8')) # Print without escaping non-printable bytes
			else:
				print(f"	statusCode: {objURLresponse.status_code}	retrieving:	{fullPathFileName}")
			print(" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~")
			
getAndSaveFilesListedInURLresponse(githubRepoContentURL)

def getGithubSHAofFile(fpFName):
	with open(fpFName, 'rb') as frb:
		fileContent = frb.read()
	githubSHA1prefix = b"blob " + f"{len(fileContent)}".encode("utf-8") + b"\x00"
	sha1sum = hashlib.sha1(githubSHA1prefix + fileContent).hexdigest()
	return sha1sum
	# Shell equivalent (note: echo -e with command substitution mangles binary
	# content and strips trailing newlines, so pipe the file bytes directly;
	# behavior also differs between dash and bash):
	#   f=/full/path/file/name.ext
	#   printf 'blob %s\0' "$(stat -c '%s' "$f")" | cat - "$f" | sha1sum

"""
f=baseDownloadDir+"/"+".clang-format"
sha1sum = getGithubSHAofFile(f)
print(type(sha1sum), sha1sum)
"""
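The blob-SHA formula used above can be sanity-checked against a value git itself produces: `printf 'hello\n' | git hash-object --stdin` prints the same well-known hash.

```python
import hashlib

def github_blob_sha1(content: bytes) -> str:
	"""SHA-1 over git's blob header ("blob <size>\\0") followed by the raw bytes."""
	header = b"blob " + str(len(content)).encode("ascii") + b"\x00"
	return hashlib.sha1(header + content).hexdigest()

# Well-known git blob hash of the six bytes b"hello\n":
print(github_blob_sha1(b"hello\n"))
# → ce013625030ba8dba906f756967f9e9ca394464a
```

If the hash computed for a downloaded file matches the `sha` field the contents API returned for it, the file arrived intact.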

It would certainly be nice to have another repository with resumable downloads, but for the moment the Python script I put together above runs and downloads the repository over my slow connection. Issue solved?

@orbitalquark (Owner)

Sorry, I don't know of any mirrors. My ISP blocks access to SourceForge, so I set up a GitHub mirror of a SourceForge project that uses GitHub Actions to pull in new changes every night so I can use that project. Perhaps you could do something similar using another git hosting service. Perhaps your clever workaround is enough though.

I'm going to close this because I'm not going to set up a mirror and GitHub network issues are out of my control.
