
[kernel] Add fmemalloc sys call, fix fsck on 65M disks #1314

Merged
merged 2 commits into ghaerr:master from ghaerr:fsck on Jun 10, 2022

Conversation

ghaerr
Owner

@ghaerr ghaerr commented Jun 10, 2022

Adds _fmemalloc system call to allow processes to allocate far memory for themselves. Memory is automatically freed at process exit.
Adds fmemalloc wrapper and fmemset C library routines.
Enhances fsck to use far memory (up to 64K), which now allows working on max-sized (65M) hard disks.
Rearranges some multiply-included kernel header files.

Fixes fsck as requested in #1312.
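
For reference, a minimal usage sketch of the new calls from an application's point of view. The prototypes shown are assumptions for illustration only; the real declarations live in the ELKS C library headers:

```c
/* Minimal sketch of using the new far-memory calls.
 * Assumed prototypes -- check the ELKS libc headers for the real ones. */
char __far *fmemalloc(unsigned long size);
void fmemset(char __far *buf, int c, unsigned long n);

int main(void)
{
    /* ask the kernel for a 64K block outside our own data segment */
    char __far *buf = fmemalloc(65536UL);

    if (!buf)
        return 1;               /* allocation failed */

    fmemset(buf, 0, 65536UL);   /* clear it with the far memset */
    buf[1234] = 'x';            /* far pointers index like normal arrays */

    /* no explicit free needed: the kernel releases the block at process exit */
    return 0;
}
```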

@ghaerr ghaerr merged commit 41a2ed0 into ghaerr:master Jun 10, 2022
@ghaerr ghaerr deleted the fsck branch June 10, 2022 02:59
@Mellvik
Contributor

Mellvik commented Jun 10, 2022 via email

@ghaerr
Owner Author

ghaerr commented Jun 10, 2022

This is close to incredible. Not only did we go from impossible to 'done' in almost no time (fsck)

Thanks, I had forgotten that I had previously thought about using far memory buffers for this the last time I looked. It turned out to be pretty straightforward after that.

If I understand it correctly, this is what we used to call upper memory, available on most systems post-XT. I can't wait to see what it may do for commands like dd (with raw devices ... ), tar, compress, and more.

No - not quite. For the first version, I hardcoded 0xD000, which is the segment address of the upper memory portion of PC memory. However, this final version just allows any ELKS application program to ask for memory OUTSIDE its own address space. The current implementation gets memory from the kernel memory manager, which allocates from main memory, which does not include upper memory. We have been talking about adding upper memory to the main memory allocation routine, and that would help for systems that have little memory, but other than that it won't speed up application programs. This upper memory is also already usable by RAM disks on ELKS, selected in the config.

If you think we should add the capability to use upper memory, I'll add that to the list.

@ghaerr
Owner Author

ghaerr commented Jun 10, 2022

However, our new fmemalloc routine would allow ELKS programs to allocate any amount of available (linear) main memory for their own use, allowing, for instance, 64K-256K buffers for dd. But, since the low-level disk I/O is only performed in 1K chunks, I don't really think this will help with any I/O bottlenecks. It does allow programs with much larger data requirements to be written, which would use the char __far * type to access far data. Due to compiler limitations, though, the data is only accessible in 64K segments, but that's not too hard to handle.
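
To make that last point concrete, here is one hedged way a program could handle a logical buffer bigger than 64K: allocate it as several 64K far blocks and pick the block by dividing the logical offset. The fmemalloc prototype is assumed as before, and none of the helper names below come from the ELKS tree:

```c
/* Sketch: treat a 256K logical buffer as four separate 64K far blocks,
 * since a single __far pointer can only address one 64K segment.
 * Helper names are illustrative, not from the ELKS sources. */
char __far *fmemalloc(unsigned long size);   /* assumed prototype */

#define SLICE   65536UL     /* one compiler-addressable 64K segment */
#define NSLICES 4           /* 4 x 64K = 256K logical buffer */

static char __far *slice[NSLICES];

static int bigbuf_init(void)
{
    int i;

    for (i = 0; i < NSLICES; i++) {
        slice[i] = fmemalloc(SLICE);
        if (!slice[i])
            return -1;      /* kernel frees the blocks at process exit anyway */
    }
    return 0;
}

/* store one byte at a logical offset 0..(256K-1) */
static void bigbuf_put(unsigned long off, int c)
{
    slice[off / SLICE][(unsigned int)(off % SLICE)] = (char)c;
}
```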

@Mellvik
Contributor

Mellvik commented Jun 11, 2022 via email

@ghaerr
Owner Author

ghaerr commented Jun 11, 2022

the improvement is more memory available per process, while still inside the 640k range, right?

Yes.

Yanking compress from 12 bits to 16, even though it will be slow.

Yes, that could probably be done now by allocating a far (char __far *) buffer.
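
A rough sketch of what that could look like on the decompressor side; the table name and sizing here are illustrative only and not taken from the actual compress sources:

```c
/* Illustrative only: move a 2^16-entry decode table out of the near data
 * segment into an fmemalloc'd far block. The real compress table layout
 * may differ. */
char __far *fmemalloc(unsigned long size);   /* assumed prototype */

#define MAXBITS 16

static unsigned char __far *suffix_tab;      /* 2^16 one-byte entries = 64K */

static int tables_init(void)
{
    suffix_tab = (unsigned char __far *)fmemalloc(1UL << MAXBITS);
    return suffix_tab ? 0 : -1;
}

/* lookups then index the far pointer just like the old near array,
 * e.g.  byte = suffix_tab[code];  with code an unsigned 16-bit value */
```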

Run kilo decently (maybe?).

No, the problem with kilo is that it redraws the screen whenever an arrow key is pressed. I did realize we have mined, the MINIX screen editor, already ported to ELKS. @toncho11 will probably find it superior. Perhaps we should rename it 'edit' so that it is obvious what it is.

if it could free up some low memory for the kernel so the # of processes can be increased.

Unfortunately, fmemalloc won't free up any kernel data segment memory, which is tight. The max process limit could be increased by allocating task structures dynamically, which could help you. There are some issues with going that route, which we can discuss, but it might allow for 5-6 or more processes beyond the current 16 before kernel memory runs out again. Another issue you're likely having with 3 serial lines open is the large 1K receive buffer allocated to each by default. This also comes out of kernel memory, and would ultimately compete with the number of tasks available if those were dynamically allocated.

@Mellvik
Contributor

Mellvik commented Jun 11, 2022 via email

@ghaerr
Owner Author

ghaerr commented Jun 11, 2022

I always forget the tar option to set 12 bit compression and have to do it all over again.

I'm not sure we want to automatically default to 16-bit compression for ELKS, especially because there may not always be memory for it.

Moving to a 12-bit default (rather than a separate option for it) might be a good idea though.

What exactly is your issue: are you talking about running on ELKS or on the host, and is it a tar option or the compress -b 12 that you forget? Perhaps these could be defaulted so that things work more as expected.

Now, if there is (dynamic) competition for the resource, the user (me in this case) would be able to prioritize: Ditch a serial line when more processes are required.

Exactly. The further issue is that since the kernel would be allocating a relatively large structure from the heap (~800 bytes for each task struct, 1K for serial), when/if the kernel heap gets fragmented, we could permanently lose the ability to allocate the required structure. This should only be an issue with a lot of longer-running processes, though.

@Mellvik
Contributor

Mellvik commented Jun 11, 2022 via email

@ghaerr
Owner Author

ghaerr commented Jun 11, 2022

The problem is always TO ELKS. When untarring, tar will use the right bit length automatically if supported by the host system.

I see. So the ability for compress -d on ELKS to decompress a 16-bit compressed file is quite useful. I'll add that to my list to see how compress on ELKS might be enhanced using fmemalloc. Thanks!

@Mellvik
Contributor

Mellvik commented Jun 11, 2022 via email

@Mellvik
Contributor

Mellvik commented Jun 12, 2022 via email

@ghaerr
Owner Author

ghaerr commented Jun 12, 2022

it turns out that -a is negated by -r, which explains what I thought was erratic behaviour. -ra is different from -ar because -r turns off -a.
I suggest this negation be removed from -r.

Ok - quite confusing, since -ra is not the same as -ar.
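
For anyone following along, the order dependence is the usual effect of a single-pass flag loop where a later option clears state set by an earlier one. A hypothetical sketch of that shape (not the actual fsck source; flag names are made up):

```c
/* Hypothetical single-pass option loop showing why -ra != -ar:
 * flags are handled in the order given, so -r seen after -a clears
 * the flag that -a just set. Not the real fsck source. */
#include <stdio.h>

int main(int argc, char **argv)
{
    int automatic = 0, repair = 0;
    int i;
    const char *p;

    for (i = 1; i < argc; i++) {
        if (argv[i][0] != '-')
            continue;
        for (p = &argv[i][1]; *p; p++) {
            switch (*p) {
            case 'a':
                automatic = 1;      /* automatic repair */
                break;
            case 'r':
                repair = 1;
                automatic = 0;      /* -r negates -a */
                break;
            }
        }
    }
    /* "-ra" ends with automatic=1; "-ar" ends with automatic=0 */
    printf("automatic=%d repair=%d\n", automatic, repair);
    return 0;
}
```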

Whether -a should continue to imply -r is a different story. Ideally it should not, but option processing becomes easier that way.

I think -a should be left alone - it appears that the original intent was to either run fsck with -a or with -r, and some folks may continue to think that way, which will still work.

Thank you!
