Provide x86-64-v4 #18

Closed
opened 2021-07-09 22:51:30 +02:00 by anonfunc · 36 comments
Owner

I plan to provide x86-64-v4 in addition to already existing v3.

This is currently blocked by me not having access to a single machine capable of x86-64-v4.

If my quick research is correct, AVX512 (and therefore x86-64-v4) is, at the time of writing, only supported by Intel 11th-gen Core CPUs (i5 or higher).

All considerations of #17 should apply here as well.

anonfunc added the
enhancement
help wanted
labels 2021-07-09 22:51:30 +02:00

Hello, I have one of these CPUs:
11th Gen Intel® Core™ i7-1165G7
Subdirectories of glibc-hwcaps directories, in priority order:
x86-64-v4 (supported, searched)
x86-64-v3 (supported, searched)
x86-64-v2 (supported, searched)

How can I help?
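For anyone else wanting to check what their CPU supports: the glibc-hwcaps listing above comes from the dynamic loader (`/usr/lib/ld-linux-x86-64.so.2 --help` on Arch), and a rough version of the same decision can be sketched from `/proc/cpuinfo` flags. This is an illustrative sketch, not ALHP code, and the flag sets are abbreviated to a few representative features per level, not the full psABI lists:

```python
# Rough sketch: map /proc/cpuinfo feature flags to the highest
# x86-64 microarchitecture level. Flag sets are abbreviated.
LEVEL_FLAGS = {
    "x86-64-v2": {"cx16", "popcnt", "sse4_1", "sse4_2", "ssse3"},
    "x86-64-v3": {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe"},
    "x86-64-v4": {"avx512f", "avx512bw", "avx512cd", "avx512dq", "avx512vl"},
}

def highest_level(cpu_flags):
    """Return the highest x86-64-vN whose (representative) flags are all present."""
    best = "x86-64-v1"
    for level in ("x86-64-v2", "x86-64-v3", "x86-64-v4"):
        if LEVEL_FLAGS[level] <= cpu_flags:
            best = level
        else:
            break  # levels are cumulative: a missing level blocks the ones above
    return best

def read_cpu_flags(path="/proc/cpuinfo"):
    """Collect the flags of the first CPU listed in /proc/cpuinfo."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()
```

On an i7-1165G7 or i5-1135G7, `highest_level(read_cpu_flags())` should report `x86-64-v4`.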

Author
Owner

Sorry to get back to you so late.

Sadly there is no feasible way to build on third-party machines, simply because I currently cannot see a way to preserve the chain of trust in such a scenario (well, there is one, more below).

x86-64-v4 is going to be possible as soon as I get my hands on a machine that supports it (read as: my server provider offers such a server). Additionally, this machine would be rented for ALHP purposes only, so there needs to be secure financing, since I already provide the current build-server and I cannot and will not donate a second server out of my own pocket.

I do have one idea how distributed building would be doable (as in how build-artifacts could be validated), but that would require much work and at least building each package 2-3 times to validate it, and builds would need to be reproducible, so that build-artifacts are comparable and (hopefully) identical.
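The validation idea above could look roughly like the sketch below. This is hypothetical, not ALHP code; `validate_artifact` and its 2-of-3 quorum policy are made-up names for illustration, and the whole approach presumes reproducible builds so that honest builders produce bit-identical bytes:

```python
# Hypothetical sketch: accept a distributed build only if enough independent
# builders produced bit-identical artifacts (requires reproducible builds).
import hashlib
from collections import Counter

def validate_artifact(builder_outputs, min_agreement=2):
    """builder_outputs: list of raw package bytes, one per builder.
    Return the agreed sha256 hex digest, or None if no quorum was reached."""
    digests = [hashlib.sha256(blob).hexdigest() for blob in builder_outputs]
    digest, votes = Counter(digests).most_common(1)[0]
    return digest if votes >= min_agreement else None
```

A lone malicious or broken builder is outvoted; if no two builders agree, the package is rejected and would have to be rebuilt.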

Contributor

Just wanna chime in. Your idea of an x86-64-v2.5/x86-64-v2+ in #17 would be great. AVX is quite a big step, and I'm sad that my AVX-capable machines cannot use it.

Sure, it only covers two to three years of CPU development, but it was a pretty big step IMHO.

I got some Sandy Bridge CPUs here which would be pretty happy to run an x86-64-v2.5 :)


Anyway, back to topic - got an 11th Gen Intel(R) Core(TM) i5-1135G7 and I'm happy to help with beta testing or something like that. :)

Contributor

Hey @anonfunc, I saw that QEMU can actually emulate some AVX512 functions. For example:

x86: emulation support for AVX512 BFloat16 extensions

https://lists.nongnu.org/archive/html/qemu-devel/2019-12/msg02579.html

Would that be a solution? :)


Hey @anonfunc, I saw that QEMU can actually emulate some AVX512 functions. For example:

x86: emulation support for AVX512 BFloat16 extensions

https://lists.nongnu.org/archive/html/qemu-devel/2019-12/msg02579.html

Would that be a solution? :)

There are different AVX512 subsets, so, much like the eventuality of a 2.5/2+ repo for AVX-only CPUs, you would need a repo for basically each subset/group supported by a CPU generation.

  • If I'm not wrong, the emulation supports only F, CD, ER, PF, VL, DQ, BW.
  • Missing are: IFMA, VBMI, 4VNNIW, 4FMAPS, VPOPCNTDQ, VNNI, VBMI2, BITALG, VP2INTERSECT, GFNI, VPCLMULQDQ, VAES.

And there exist some odd combinations of older CPUs and mainboards that blow up if used with AVX512. So research and a disclaimer in that matter are strongly needed here.

PS: x86-64-v4 requires AVX512F, AVX512BW, AVX512CD, AVX512DQ, AVX512VL -> meaning building with QEMU should be possible.
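The PS is plain set containment: every AVX512 subset that x86-64-v4 requires appears in the list of emulated subsets, while the "missing" ones are not needed for v4. Spelled out (names are just for illustration):

```python
# Subsets per the lists in the comment above.
QEMU_EMULATED = {"F", "CD", "ER", "PF", "VL", "DQ", "BW"}
V4_REQUIRED = {"F", "BW", "CD", "DQ", "VL"}  # AVX512 subsets x86-64-v4 needs

print(V4_REQUIRED <= QEMU_EMULATED)  # → True: the v4 subset is fully covered
```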

Author
Owner

To reiterate what I wrote in #172: x86-64-v4 will probably come within the next month or two, since my server provider finally got one of AMD's 7xxx series into the product lineup.

I'm not sure yet how I will handle the build for the new march. I'm currently leaning towards running a separate ALHP instance for v4 and merging them as soon as the initial build is finished (to not delay builds on the other marches).


So @anonfunc and I tried to emulate AVX512 in QEMU/libvirt. The emulation stated in the changelog of QEMU 4.2 seems either not to work or not to exist at all. Any attempt to emulate the flags themselves, or a complete CPU, results in the VM either not booting or throwing tons of warnings while the AVX512 flags are still unavailable. Any help in this matter is welcome. For now it seems that the stated "emulation" has another meaning that is not further described in the documentation.

Author
Owner

To be specific, it seems like the combination of KVM + emulated CPU features is not supported by QEMU. Pure QEMU (without KVM) *should* work (not tested), but that would be **really** slow. Probably not a real option.

anonfunc pinned this 2023-08-29 16:50:22 +02:00

just for better understanding - you said here:

To reiterate what I wrote in #172: x86-64-v4 will probably come within the next month or two, since my server provider finally got one of amd's 7xxx series in the product lineup.

but now it is delayed due to issues with qemu and emulating AVX512.

I guess you are trying to emulate AVX512 because getting a server with AVX512 is not a viable option?
Or is the issue also appearing on an AVX512-capable server? If the latter is the case, could you share details on which provider you are using, so that we can also try to look into what the issue could be?

I am using a Ryzen 7700X in my desktop machine and could also try to gather data locally to help.

Author
Owner

@BS86 My servers are currently from [Hetzner's AX-Line](https://www.hetzner.com/dedicated-rootserver/matrix-ax). Sadly they have increased prices in the last year in a way that makes it not currently economically viable for me to get one with a Ryzen 7XXX just for this project. I'll probably switch to a Ryzen 7000 series CPU pretty soon too (on my desktop), so I'm also pretty interested in building for v4. Building on my desktop is not something I want to do (obviously).

In fact, the build server has been running on my home server since earlier this year, so that the Hetzner ones only do distribution (tier 0 mirror, so to say). Upgrading that one (currently on AM4) is sadly also not something I can spend money on right now.

Author
Owner

I have news on this one. I can probably get my hands on an AM5 cpu for my home server in the next couple of months. It's not guaranteed yet, but it looks promising for now.

I'll keep this issue updated.


Thanks for the update, that would be great!

After asking ChatGPT and doing some googling this weekend, it looks like it should be possible to cross-compile binaries for the x86-64-v4 target with `-march` even on hardware that is only capable of x86-64-v3. Naturally, the build system can't run those binaries in such a setup, but it should not do that anyway.

Would cross-compilation work in your build system?

Author
Owner

Sadly not, because a lot of packages have multiple compile stages that depend on previously built binaries. Another big chunk of packages would fail in testing, which is necessary since Arch does not support building packages without tests.

It's obviously not all of them, and a subset would probably build fine. I tested this a few years back, and it was not an awe-inspiring amount.

Author
Owner

I just started the v4 build. Should take a couple of weeks max, probably less. I'll keep this post updated.

Progress: 0/6232 packages left

Contributor

Great news, thanks for the work!


Very Very Nice! Thanks!
Is there a status page where one can track the progress of the v4 build? Or will it only be added to the main status page once it is in production?

Author
Owner

Very Very Nice! Thanks!
Is there a status page where one can track the progress of the v4 build? Or will it only be added to the main status page once it is in production?

It's currently building on a separate instance, so that we do not interrupt building for v2/v3. Once it's finished, I'll merge it with the main instance. I'll keep the post above updated so you can have a rough estimate.


Progress: 5576/6232 packages left

Nice progress already.

It looks like the Grafana system metrics at the top of the status page are showing the metrics of the host system where both instances run? The system has looked busy for the last 2.5 hours but nothing is in status "building" or "queued" - also, RAM usage is about two times the limit ;)

Author
Owner

Correct. I removed the limits since memory-based queueing seems to work well and we have some air to breathe (memory-wise) now :)


Progress: 5308/6232 packages left

ChatGPT 3.5 thinks that the build will be done by tomorrow 17:00 CET, based on the timestamps of your edits and the numbers posted - that's its full answer:
I am pretty sure that some longer-running builds are still in the queue and that some pretty fast builds were among the already built ones, but I guess your estimate of "weeks" can easily be beaten :)

To estimate when the task will be completed based on the provided historical data, we can calculate the rate of package completion per unit time and use that rate to project when the remaining packages will be finished.

Let's calculate the rate of completion between the given time points:

1. From 17:13 to 19:40 (2 hours and 27 minutes):
   - Packages completed: 6232 - 5576 = 656 packages
   - Rate: 656 packages / 147 minutes ≈ 4.46 packages per minute

2. From 19:40 to 20:25 (45 minutes):
   - Packages completed: 5576 - 5390 = 186 packages
   - Rate: 186 packages / 45 minutes ≈ 4.13 packages per minute

3. From 20:25 to 20:45 (20 minutes):
   - Packages completed: 5390 - 5308 = 82 packages
   - Rate: 82 packages / 20 minutes = 4.1 packages per minute

Now, let's use the average rate per minute to estimate the time required to complete the remaining packages:

Average rate: (4.46 + 4.13 + 4.1) / 3 ≈ 4.23 packages per minute

Remaining packages: 5308 packages

Estimated time to completion: 5308 packages / 4.23 packages per minute ≈ 1255 minutes

Now, add this estimated time to the latest timestamp:

9. Dez. 2023 20:45 MEZ + 1255 minutes ≈ 10. Dez. 2023 17:00 MEZ

So, based on the historical data and the average rate, the task is estimated to be completed around 17:00 MEZ on December 10, 2023. Keep in mind that this is a rough estimate, and the actual completion time may vary based on factors like changes in the completion rate.

Edit: with the latest update just now, it thinks we will be done by 16:42:

Certainly, let's incorporate the new data point into the calculation.

    From 20:45 to 21:35 (50 minutes):
        Packages completed: 5308 - 5097 = 211 packages
        Rate: 211 packages / 50 minutes = 4.22 packages per minute

Now, let's update the average rate with this new data point:

Average rate: (4.46 + 4.13 + 4.1 + 4.22) / 4 ≈ 4.225 packages per minute

Remaining packages: 5097 packages

Estimated time to completion: 5097 packages / 4.225 packages per minute ≈ 1207 minutes

Now, add this estimated time to the latest timestamp:

    Dez. 2023 21:35 MEZ + 1207 minutes ≈ 10. Dez. 2023 16:42 MEZ

So, with the additional data point, the updated estimate is that the task will be completed around 16:42 MEZ on December 10, 2023. Keep in mind that these calculations are based on historical data and assumptions of a consistent completion rate, and actual completion time may still vary.
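The quoted answer is just a linear extrapolation; for what it's worth, a few lines reproduce the arithmetic (numbers taken from the first quoted estimate):

```python
# Linear ETA estimate from the numbers quoted above.
intervals = [               # (packages completed, minutes elapsed)
    (6232 - 5576, 147),     # 17:13 -> 19:40
    (5576 - 5390, 45),      # 19:40 -> 20:25
    (5390 - 5308, 20),      # 20:25 -> 20:45
]
rate = sum(done / mins for done, mins in intervals) / len(intervals)
eta_minutes = 5308 / rate
print(round(rate, 2), round(eta_minutes))  # → 4.23 1254
```

The small difference from ChatGPT's 1255 minutes comes from it rounding the rate to 4.23 before dividing. As the comment notes, the assumption of a constant per-package rate is the weak point: package build times vary wildly.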
anonfunc removed the
help wanted
label 2023-12-10 15:19:31 +01:00

Once ready, time for Reddit crosspost :)

@anonfunc don't forget to add an additional general disclaimer about Intel CPU/mainboard/UEFI/microcode combinations that are either dangerous (CPU and motherboard thermal or electrical damage) or that, with updated UEFI/microcode, will throttle the CPU back to Pentium 3 performance while using AVX512, a.k.a. not worth the performance hit.

For others that might land here looking for a related solution: Intel also made questionable thermal-design decisions for their AVX512 CPUs up to 2021, for example sometimes using cheap silicone paste instead of soldering the heatspreader, or releasing hardware without testing its AVX512 needs. Some more modern Intel CPUs also got AVX512 fused off, either in hardware with a later CPU stepping or via microcode, because they couldn't handle the heat/power.

TLDR: Use a beefy CPU cooler when using v4 with Intel, and check that your motherboard can handle the additional requested amperage without overloading power-related components in the process. Check beforehand if your system is up to the task. Some Intel motherboards allow bypassing the AVX512 heat/power safety feature, but I wouldn't recommend using it. Delidding the Intel CPU (removing its heatspreader) is quite a popular but also dangerous modification.

Thankfully, modern AMD hardware doesn't have any of these issues. Just account for the additional thermal load.

@anonfunc is not responsible for any related damage.

Author
Owner

@InternetD That's already covered by the license.


@anonfunc Unless you were planning on doing so already, it would probably be a good idea to give the mirror maintainers a heads-up on when exactly you plan on pushing these packages to alhp.dev, so they can plan for increased activity due to syncing a completely new repo and people upgrading their systems.
Oh, and also because this means an additional 40GB of space that has to be accounted for.

Great effort nonetheless and a thanks to all contributors.

Author
Owner

@SunRed Sure, I'll do that. Obviously I cannot predict when exactly the build will finish, but I'll give the mirror maintainers a time window.


[...] and people upgrading their systems.

At least the Cloudflare-Mirror should not have an issue with that due to how Cloudflare's CDN works ;)

I guess when already using the v3 repo, one first has to disable the ALHP repos, do a `sudo pacman -Syyuu` to get back to the Arch Linux packages, and after that enable the v4 repos and do a simple update?
Or did you add a special 0.2 suffix to the first batch of v4 packages so that they are seen as an update to the v3 versions?
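For reference, the repo switch itself happens in `/etc/pacman.conf`. The snippet below is only a sketch: it assumes the v4 repos follow the same naming pattern the project documents for v2/v3 and that the ALHP mirrorlist is installed; check the README for the authoritative entries.

```ini
# Sketch of /etc/pacman.conf entries (names assumed from the v3 pattern).
# ALHP entries must come BEFORE the stock [core]/[extra] sections.
[core-x86-64-v4]
Include = /etc/pacman.d/alhp-mirrorlist

[extra-x86-64-v4]
Include = /etc/pacman.d/alhp-mirrorlist
```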

Author
Owner

@BS86 I did not, no. That would have been a good idea, oh well, too late for that. The correct path now, as you have described, would be to downgrade to official packages and then upgrade to v4.

Author
Owner

Merge completed. v4 is now online and should be syncing to the tier 0 mirror soon.

Syncing to the tier 0 mirror can take some time, since my upload speed is very limited. Should be done in a few hours max.

Upload progress: 50%


README.md and Status Page still need updates - but I guess those will come once the mirrors have synced?

Author
Owner

@BS86 Just pushed the readme changes.


Just tried to update and found another thing to do when switching: after downgrading and before upgrading, the pacman cache has to be cleared with `sudo pacman -Scc`; if this is not done, the v3 packages still in the cache will be used instead of downloading the v4 ones.

Author
Owner

Good point, added it to the FAQ.


The mirrors apparently have not yet fully synced, which is why I have not done the update yet - I just scanned the changes found so far and saw rather many cache hits in pacman's `VerbosePkgLists`. Will try the actual update tomorrow.

Another thing: on the status page, rather many of the v4 rebuilds have failed; the ones I specifically checked are systemd, linux-zen, pipewire and firefox - can you take a look into why they failed and maybe push another rebuild?


Since he merged the v2/v3 and the v4 instances, it surely will. At least for the kernel, the performance impact of missing AVX512 is non-existent.

Author
Owner

Another thing: On the status page, rather many of the v4 rebuilds have failed, the ones I especially checked are systemd, linux-zen, pipewire and firefox - can you take a look into why they failed and maybe push another rebuild?

I think `firefox` is failing on all levels currently; I have not looked at why it's failing yet. I'll have a look at the rest if I find time.

I also fixed another bug where the `repo-add` script only added a fraction of the v4 extra repo.


Thanks. It still only found 500 MB of updates (yesterday 300 MB), but after the fix, 1.5 GB of updates were found and installed (the downgrade was 2 GB, but from what I found out, you are using much higher zstd compression than Arch, so that seems OK).
So far, everything is running great.

At least for the kernel the performance impact of missing AVX512 is non existent.

It is not missing AVX512; it is missing all optimizations, as it is now the plain Arch package.

Author
Owner

Closing as completed.

anonfunc unpinned this 2023-12-19 02:30:42 +01:00