Installer fails on Librem 13v2 with AttributeError #3050

Closed
michas2 opened this Issue Aug 27, 2017 · 21 comments

michas2 commented Aug 27, 2017

Qubes OS version (e.g., R3.2):

R4.0-rc1

Affected TemplateVMs (e.g., fedora-23, if applicable):

Installer / anaconda


Expected behavior:

It should not give an error message. :)
It should be able to report errors.

Actual behavior:

As soon as the graphical installer tries to ask for the language, it pops up a Python error: "'str' object has no attribute 'name'"

It offers to send a bug report. However, it is not trivial to set up networking at this stage of the installer.
After manually setting up networking, the installer tries to report the bug to bugzilla.redhat.com but fails, because it uses a 'Generic' product which is not available there.

Steps to reproduce the behavior:

Use a Librem 13v2 and boot from a USB key containing Qubes R4.0-rc1.

General notes:

Qubes R3.2 installer works fine on that laptop.
Qubes R4.0-rc1 installer works fine on a different laptop.


Related issues:

michas2 commented Aug 27, 2017

Sorry, forgot the details.
Here they are:

07:11:04,582 DEBUG anaconda: running handleException
07:11:04,583 CRIT anaconda: Traceback (most recent call last):

  File "/usr/lib64/python3.5/site-packages/pyanaconda/threads.py", line 251, in run
    threading.Thread.run(self, *args, **kwargs)

  File "/usr/lib64/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/osinstall.py", line 1175, in storage_initialize
    storage.reset()

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/blivet.py", line 271, in reset
    self.devicetree.populate(cleanup_only=cleanup_only)

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/populator/populator.py", line 451, in populate
    self._populate()

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/populator/populator.py", line 518, in _populate
    self.handle_device(dev)

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/populator/populator.py", line 318, in handle_device
    self.handle_format(info, device)

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/populator/populator.py", line 345, in handle_format
    helper_class(self, info, device).run()

  File "/usr/lib/python3.5/site-packages/blivet/populator/helpers/formatpopulator.py", line 89, in run
    self.device.format = formats.get_format(type_spec, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/formats/__init__.py", line 98, in get_format
    fmt = fmt_class(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/formats/fs.py", line 125, in __init__
    self.update_size_info()

  File "/usr/lib/python3.5/site-packages/blivet/threads.py", line 45, in run_with_lock
    return m(*args, **kwargs)

  File "/usr/lib/python3.5/site-packages/blivet/formats/fs.py", line 297, in update_size_info
    result = self._minsize.do_task()

  File "/usr/lib/python3.5/site-packages/blivet/tasks/fsminsize.py", line 131, in do_task
    raise FSError("failed to get block size for %s filesystem on %s" % (self.fs.mount_type, self.fs.device.name))

AttributeError: 'str' object has no attribute 'name'

anaconda-2017-08-27-08:48:15.249830-1340.tar.gz

kakaroto commented Aug 28, 2017

Yep, I had the exact same issue. It's a known, old, unfixed issue reported to Fedora here: https://bugzilla.redhat.com/show_bug.cgi?id=1400840
From what I could see, it's a bug in the python3-blivet package: while formatting the error message it accesses self.fs.device.name, but self.fs.device is a plain string.
I entered the debugger and found that "self.fs.device" is the actual string; in my case it was "/dev/sda1". For some reason blivet can't get the block size of that device, and it's the attempt to report that failure which crashes.
In my case, I have two SSDs (2.5" and M.2) and wanted to install Qubes on one of them. I'm sure the error would disappear if I deleted the partitions or reformatted /dev/sda1, but that disk already has a system installed on it which I do not want to overwrite, since I was planning to install Qubes on the other disk. And I really don't feel like opening up the laptop to remove the 2.5" SSD just to install Qubes on the M.2 SSD and then put the other drive back in.

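To make the failure mode concrete, here is a minimal Python sketch (made-up class and function names, not the actual blivet source): the code that builds the FSError message assumes the filesystem's device is an object with a .name attribute, so when the device is a plain string like "/dev/sda1", constructing the error message itself raises AttributeError and masks the underlying "can't get block size" failure.

class FSError(Exception):
    pass

class FS:
    def __init__(self, mount_type, device):
        self.mount_type = mount_type  # e.g. "ext4"
        self.device = device          # a plain str here, not a device object

def report_failure_buggy(fs):
    # Crashes before the FSError is ever raised:
    # AttributeError: 'str' object has no attribute 'name'
    raise FSError("failed to get block size for %s filesystem on %s"
                  % (fs.mount_type, fs.device.name))

def report_failure_fixed(fs):
    # Interpolating the device directly works for both strings and device
    # objects (their str() is used), so the intended FSError is raised.
    raise FSError("failed to get block size for %s filesystem on %s"
                  % (fs.mount_type, fs.device))

# report_failure_buggy(FS("ext4", "/dev/sda1"))  -> AttributeError (this issue)
# report_failure_fixed(FS("ext4", "/dev/sda1"))  -> readable FSError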

arvog commented Aug 28, 2017

I can confirm that reformatting and setting default partitioning during install yields no error message on a Librem 13v2 (with one M.2 in my case).

marmarek commented Aug 29, 2017

What is "/dev/sda1" in that failed case? LVM? Or some filesystem (which one)? What is partition table there (mbr or gpt)?

darcy commented Aug 30, 2017

@marmarek this is new territory for me, but I have some details that might help. I was able to Ctrl-Tab to a console from the debugger and run a few commands (typed in manually, so forgive any errors):

# fdisk -l /dev/sda
Disk /dev/sda: 465.8 GiB [...]
Disklabel type: dos

Device    Boot    Start    End    Sectors    Size  Id Type
/dev/sda1        2048   4102143   4100096     2G 83 Linux  
/dev/sda2  *  4102144   5664767   1562624   763M 83 Linux
/dev/sda3     5666814 976771071 971104258 463.1G  5 Extended
/dev/sda4     5666816 976771071 971104256 463.1G 83 Linux
# parted /dev/sda print
Model: ATA Samsung SSD 850 (scsi)
Disk /dev/sda: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system  Flags
 1      1049kB  2100MB  2099MB  primary   ext4
 2      2100MB  2900MB  800MB   primary   ext4         boot
 3      2901MB  500GB   497GB   extended
 5      2901MB  500GB   497GB   logical

In the debugger, the following values were displayed:

(Pdb) self.fs 
*** AttributeError: 'Ext4FS' object has no attribute 'name'

(Pdb) self.fs.device
'/dev/sda1'

(Pdb) self.mount_type
'ext4'

(Pdb) self.fs.name
'ext4'

(Pdb) self._extract_block_size()
[None]

(Pdb) self._current_info
[None]

(Pdb) self._get_resize_info()
'Estimated minimum size of the filesystem: 828165\n' 
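
Those values suggest the chain of events: resize2fs did report an estimated minimum size (in filesystem blocks), but the block size needed to convert it to bytes could not be extracted from this partition's metadata, so the minimum-size task takes the error path whose message formatting then crashes. A rough Python sketch of that logic (hypothetical helper names, not blivet's actual code):

import re

def extract_block_size(fs_info):
    # Hypothetical stand-in for blivet's block-size lookup (e.g. parsing
    # dumpe2fs output); for the broken partition no such info is available,
    # matching self._extract_block_size() returning None above.
    m = re.search(r"Block size:\s*(\d+)", fs_info or "")
    return int(m.group(1)) if m else None

def min_size_bytes(resize_info, fs_info):
    # resize2fs -P reports the minimum size in filesystem *blocks*, so a
    # block size is needed to convert it to bytes.
    m = re.search(r"Estimated minimum size of the filesystem:\s*(\d+)",
                  resize_info)
    block_size = extract_block_size(fs_info)
    if m is None or block_size is None:
        # The path taken here: the FSError whose formatting then crashed.
        raise RuntimeError("failed to get block size")
    return int(m.group(1)) * block_size

# With the values above, resize_info is present (828165 blocks) but the
# filesystem info is None, so the error path is taken:
# min_size_bytes('Estimated minimum size of the filesystem: 828165\n', None)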

darcy commented Aug 30, 2017

Here are the steps I used to reformat the drive to get up and running:

parted /dev/sda mklabel gpt
parted -a opt /dev/sda mkpart primary ext4 0% 100%
mkfs.ext4 -L datapartition /dev/sda1

kakaroto commented Aug 30, 2017

Pretty much the same fdisk/parted results for me. And yes, the existing OS was encrypted, but I think that's irrelevant: /dev/sda1 was not mounted, and I can't mount it either (bad superblock). If I hexdump it, I see this kind of stuff:

00000480  00 00 00 00 00 00 00 00  2f 76 61 72 2f 6c 69 62  |......../var/lib|
00000490  2f 70 75 72 65 6f 73 2d  6f 65 6d 2f 74 61 72 67  |/pureos-oem/targ|
000004a0  65 74 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |et..............|

and:

08808000  01 43 44 30 30 31 01 00  20 20 20 20 20 20 20 20  |.CD001..        |
08808010  20 20 20 20 20 20 20 20  20 20 20 20 20 20 20 20  |                |
08808020  20 20 20 20 20 20 20 20  50 75 72 65 4f 53 20 38  |        PureOS 8|
08808030  2e 30 20 4c 69 76 65 20  20 20 20 20 20 20 20 20  |.0 Live         |

So it looks to me like this partition is probably the one used by the OEM installer for the first install: it's meant to keep a copy of the PureOS installer ISO while the user does the initial setup after receiving their machine. I think it may also have been corrupted (either on purpose, to avoid booting into it, or by a bug in the OEM installer), which is why it's not mountable and why the Qubes installer complains about it.

You can find the latest OEM images here: https://downloads.puri.sm/oem/2017-08-20/
That should allow others to reproduce the issue and confirm/fix the problem.
(Note: booting that ISO will automatically delete your partitions and install itself on (potentially all of) your hard drive(s). I don't think there is any "press Enter to confirm" prompt, so don't run it on a system holding data you're not ready to lose.)

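"CD001" is the ISO9660 volume-descriptor signature (a pristine ISO has it 32769 bytes in), which supports the theory that the partition holds a leftover, possibly shifted or damaged, copy of the PureOS installer ISO. A small hypothetical Python sketch for scanning a partition for that signature by hand:

def find_iso9660_descriptors(path, limit=512 * 1024 * 1024):
    """Scan the first `limit` bytes of a device or file for ISO9660 volume
    descriptors: a 2048-byte descriptor sector starts with a type byte
    followed by b"CD001".  A pristine ISO has its primary descriptor at
    offset 32768; hits elsewhere (as in the hexdump above, at 0x08808000)
    suggest an image embedded at an offset."""
    hits = []
    with open(path, "rb") as f:
        offset = 0
        while offset < limit:
            sector = f.read(2048)
            if len(sector) < 2048:
                break
            if sector[1:6] == b"CD001":
                hits.append(offset)
            offset += 2048
    return hits

# Example (as root): find_iso9660_descriptors("/dev/sda1")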

arvog commented Jan 18, 2018

@ksoona As far as I know, VT-d is not yet enabled in the latest Librem custom coreboot. There is nothing you can do about that, unless you want to help @kakaroto with the programming work on it. You can of course still install the Qubes 4.0 RCs (at least it worked for me), and maybe also the upcoming final, but until there is a VT-d-enabled coreboot version for your Librem, I would not consider your Qubes 4 installation safe.

darcy commented Jan 18, 2018

@ksoona VT-d is required since 4.0-rc2, so you can only install rc1 on the Librem 13v2 until coreboot is updated to include it. There is no ETA on this at the moment: https://tracker.pureos.net/T179

You can run 3.2 on it, but suspend/resume doesn't work.

jonkri commented Feb 18, 2018

Things are happening.

Purism will announce on their blog when an update to Coreboot 4.7 is available. This update will make it possible to install the latest version of Qubes (on both TPM and non-TPM laptops). 😃

kakaroto commented Feb 19, 2018

@jonkri note that with the latest rc4 Qubes installer, I still got that same error. And coreboot 4.7 is irrelevant in this case, as it won't fix the bug in the installer. As I wrote above:

From what I could see, it's a bug in the python3-blivet package: while formatting the error message it accesses self.fs.device.name, but self.fs.device is a plain string.


honzahosek commented Feb 23, 2018

I got a similar traceback to @michas2's right after the Qubes R4.0-rc4 installer started, also on a Librem 13v2. It did not happen when I removed the hard disk with PureOS installed on it. I think the bug is fixed here: storaged-project/blivet@b4407b2#diff-4f946d685eb25b56e6b3e57fcc5c00d8


marmarek commented Feb 23, 2018

But the AttributeError is in the code that reports an error, so when we fix one, we'll get another (not saying it shouldn't be fixed, obviously). Can you post the PureOS partition layout so I can replicate the problem?

marmarek commented Feb 23, 2018

Eh, I should read more than the last 3 comments before writing mine. I see @darcy already provided the partition layout and @kakaroto the link to the OEM images...

kakaroto commented Feb 23, 2018

Yep, that looks like the fix in blivet; it just needs to be integrated into the Qubes installer now. And yes, that will not prevent the error due to the partition, but it will prevent the installer from crashing. I don't know exactly what will happen then, but I expect a popup or something saying "unknown partition type" somewhere; definitely a recoverable error.


marmarek added a commit to marmarek/qubes-installer-qubes-os that referenced this issue Feb 24, 2018


marmarek commented Feb 24, 2018

That was it. Before applying the fix: https://openqa.qubes-os.org/tests/93
After: https://openqa.qubes-os.org/tests/112

marmarek added a commit to marmarek/openqa-tests-qubesos that referenced this issue Feb 24, 2018

Test for installing over existing system
Simulate installation over an existing PureOS installation, which has:
 - ext4 unencrypted /boot
 - encrypted (LUKS) system partition
 - broken filesystem on "recovery" partition

The last one causes an anaconda crash: QubesOS/qubes-issues#3050
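
For anyone wanting to reproduce this locally without the OEM image, here is one possible way to build a disk image with the same three ingredients (a sketch under assumptions, not the actual openQA test code; file names, sizes, and the exact corruption are illustrative):

import subprocess

def sh(cmd):
    # Run a shell command, aborting on failure (these steps need root).
    subprocess.run(cmd, shell=True, check=True)

# Sparse 10 GiB image with an MSDOS label and a PureOS-like layout.
sh("truncate -s 10G pureos.img")
sh("parted -s pureos.img mklabel msdos")
sh("parted -s pureos.img mkpart primary ext4 1MiB 2GiB")  # OEM "recovery"
sh("parted -s pureos.img mkpart primary ext4 2GiB 3GiB")  # unencrypted /boot
sh("parted -s pureos.img mkpart primary 3GiB 100%")       # LUKS system

# Attach the image; -P creates partition nodes, --show prints e.g. /dev/loop0.
loopdev = subprocess.check_output(
    "losetup -fP --show pureos.img", shell=True, text=True).strip()

sh(f"mkfs.ext4 -q {loopdev}p1")  # "recovery"
sh(f"mkfs.ext4 -q {loopdev}p2")  # /boot
sh(f"printf passw0rd | cryptsetup luksFormat -q {loopdev}p3 -")

# Damage the "recovery" filesystem past its superblock (the superblock sits
# 1024 bytes into the partition and is left intact, so the partition still
# probes as ext4 while e2fsprogs fail on it).  The exact corruption on the
# real OEM partition may differ.
sh(f"dd if=/dev/urandom of={loopdev}p1 bs=1024 seek=4 count=8 conv=notrunc")
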
kakaroto commented Feb 25, 2018

Awesome, thanks!


marmarek commented Feb 28, 2018

BTW, if you want to test it yourself, you can either build the ISO or download it from the successful openQA test linked above (the "Logs & Assets" tab). That's a new part of our testing infrastructure, and we're still evaluating the possibilities. Note that the ISO image there is intentionally not signed; use it for testing only.

qubesos-bot commented Mar 4, 2018

Automated announcement from builder-github

The package pykickstart-2.32-4.fc25 has been pushed to the r4.0 testing repository for dom0.
To test this update, please install it with the following command:

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing

Changes included in this update


qubesos-bot commented Mar 12, 2018

Automated announcement from builder-github

The package pykickstart-2.32-4.fc25 has been pushed to the r4.0 stable repository for dom0.
To install this update, please use the standard update command:

sudo qubes-dom0-update

Or update dom0 via Qubes Manager.

Changes included in this update

