MRTestRepo/Notes


2022-04-10 AM by KsWoodsMan
It's been a month, and a busy one.
The RepoUpdater is behaving well. It has reached a stable state and is production ready.
Manuel has brought in support for installing additional packages during the OS install.
Progress there has been steady, with quite a bit to show for it.
I've created a script to build the modules from a formatted list.
It reads the list line by line, testing for conditions to apply to its output.
I've given it a little bit of error checking so it doesn't do screwy things when run on an autogenerated file it has made.
It's "hacky" and may lack finesse, but it's working, and it makes fully fleshed-out modules usable in calamares.
Our package lists are rather large. They can stand to be trimmed and re-organized.
2022-03-12 PM by KsWoodsMan
Still working on the repo updater.
Parts of it might be best partly broken out so local builds benefit from the updating as well.
Having pepbld.sh run the updater means their repo is updated with each build.
Having the build server run each of the parts also means the developers' repos could be updated nightly.
It could also mean their core files used for the release builds would be changed daily.
For testing purposes, we still have our nightly and testing builds that can be completely unique.
2022-03-11 PM by KsWoodsMan
Working on the updater has presented several glitches in the way I was considering its implementation.
As the team grows and tasks/responsibilities are reallocated, I've been trying to keep additional repos "aligned" to a single core repo.
The inherent problem is that there hasn't been an actual "base repo" where all of the files that make Peppermint what it is are located.
The KSTestRepo has been the development repo where these common files were located.
With only a single-architecture build, this hasn't been a problem.
But the changes being made were done with extra builds, like Devuan, in mind.
Quickly adding the x86_32 builds has shown areas I hadn't considered.
I need to move away from the common files being located in the amd64 build and have a "core files" repo where we all go to get and update the common files.
This will work _much_ better at keeping the small differences from influencing the production repo.
If it's found that any of the common files breaks something in different builds, that file can be "ignored" when updating common files.
There needs to be a "first person" approach where a developer can keep their build files intact as they work on their differences.
The biggest differences are currently in peploadersplash, pepcal/calamares/modules and peppackages.
These will be the files getting updated in the unique builds in the production repo.
The other differences are in the build scripts in the root of the repo directory.
Select files, intended for the release, will also be included in the unique build directories inside PepDistroConfigs.
Then there is a "second person" approach: maintaining the common files used by the developers for their repos.
This way there can be a "core team" for this one repo.
This will keep changes in the x86_64 builds from breaking things in the x86_32 and Devuan builds and the reverse since individual changes won't show up in the core repo.
The production repo will also need to be updated from the common or "core repo".
For a "third person", they won't have to clone all of the repos to make any of the ISOs.
Cloning PepDistroConfigs will provide a central place to build any of the ISOs we've released.
At the time of a release, PepDistroConfigs will be updated with the most recent files from the outside repos and each of the unique builds.
Reducing the complexity for both the devs and end users is paramount.
Whatever goes into place must be easily repeatable for each of the developers, as well as anyone giving it a try from the repo.
Not everyone will have cloned the repo to ${HOME}/pep_builder, so the scripts for the updater need to be relative to the current directory.
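One way to get that (a sketch, not necessarily how the updater does it) is to resolve everything from the script's own location instead of assuming ${HOME}/pep_builder:

#!/bin/bash
# Sketch: work from wherever this script actually lives.
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
cd "$SCRIPT_DIR" || exit 1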
When adding a "core repo", and to reduce some of the confusion, the build repo I've been working in could be renamed to PepOSx86_64 or similar.
To follow the same naming convention the PepI686Configs and PepDevuanConfigs could be renamed to PepOSx86_32 and PepOSDevuan respectively.
Then the directories going into PepDistroConfigs (PepOSx86_32, PepOSx86_64 & PepOSDevuan) will make more sense and won't seem disjointed.
We will all have to be in unison about this change. In the long run it will help immensely in keeping things aligned in the release builds.
Check with the other devs to see how they are about renaming their repo.
External Repos          Satellite/Build Repositories      ||| Build Server

PepProPixMaps >--o
PepProtools >----+->-- CoreFiles >--o
PepUpdater >-----o                  |
                                    +->--> PepOSx86_32    >------->-- nightly
                                    +->--> PepOSx86_64    >------->-- nightly
                                    +->--> PepOSDevuan    >------->-- nightly
                                    +->--(FutureBuilds)   >------->-- nightly
                                    |
                                    `----->------->----- PepDistroConfigs
                                                               \
                                                                `---->-- Release ISO's
2022-03-01 PM by KsWoodsMan
My last 10 days have been quiet here. I've been working in several areas at once.
1) Combining the new x86_32 build into what exists in the production repo at PepDistroConfigs.
2) Working on implementing the package additions from the x86_32 builds into the x86_64 builds.
3) Working on scripting to build an effective list, with full descriptions, from the .yaml files provided.
4) Adding to the source file, for the .yaml script, whether certain packages are installed by default, and allowing the end user to unselect them, if desired.
5) For offline installs, working out which additional packages will be installed by default and adding them and their dependencies to ./pool on the ISO.
6) Getting our sources.list into the live-session and available to the installer.
7) Yet another full re-write of the autoupdater to take in fine-grained differences between the 2 separate builds, incorporating them into the Production Repo.
8) To keep the auto-updater from interfering with the builds, I'm considering implementing it in the BldHelper-*.sh script for the final build.
This way when a new release is cut it will pull from the adjacent repos, updating itself and push the changes for the new release to the public repo.
9) Continuing to keep the new Devuan builds in mind, using free time and breaks to search for instances where they are using Live-Build instead of the Devuan SDK.
Though nothing is wrong with the std SDK, it doesn't fit well with the current toolset.
10) Working towards a distro update so people that have installed from earlier versions can bring their installs up-to-date with the most recent release.
Keeping in mind that some users may have modified their system as they make it their own: don't break these systems and don't cause data loss.
11) Scripting the "changing of the ISOs" so this doesn't have to be babysat or manually tended to. After a release ISO gets cut, it updates the SF and GitHub locations.
12) Looking at the multi-language support for the x86_32 builds to also incorporate this into the x86_64 builds and later into the Devuan builds.
13) And then there is _always_ the additional time given to questions and comments in the forums.
* Come back and add the updates to each of these, in footnoted form, in this same section, or follow up referring back to this entry.
2022-02-21 PM by KsWoodsMan
More housekeeping was done removing unwanted packages, as voiced from within the community.
Starting to add the newer contributed version of calamares.
Notice was given to look over the driver modules carefully, as there will be differences between architectures.
Some minor additions/changes were made in the adjacent x86_32 repo to take in the common files for the distro-specific apps.
This should make templating for the Devuan builds easier, as well as provide good pointers for use in the auto updater.
This might be where my previous idea of having "skinny trees" will work.
To have a directory for each build inside PepDistroConfigs.
In each of the mini repos are the files specific to that build.
Alongside those build-specific files/directories are symlinks pointing back to the main repo in the parent directory, for common files.
As well as a symlink in the "skinny repo" pointing to fusato in the parent directory.
During the weekly updates to the main repo, the contents of ../PepProPixMaps and ../PepProTools will get copied into place in PepDistroConfigs/PepProPixMaps and PepDistroConfigs/PepProTools.
This way all work submitted to the 2 previously mentioned repos up to a few minutes before build time will be included in the release build.
And due to the relative nature of symlinks, when the build running in fusato goes back to the source directory, it will find either the symlinks pointing to the common directories in the parent
or its unique directories and files required to build an ISO in its mini-repo directory.
I don't even think any tweaking of the "pepbld.sh" files will be required. They can go into the respective mini repo, unchanged.
To get the unique builds to run from the parent repo, create a script in the parent directory of PepDistroConfigs for each of the builds. Call it Build-amd64-ISO.sh or Build-i686-ISO or Build-Devuan-ISO ... you get the picture. ;)
This script makes sure a build directory exists or will create it for a first-time build. It can also take care of renaming the ISO, much like is done in the BldHelper scripts meant for the build server.
The script will cd to the mini-repo and start a build from there. All of the symlinks to required directories (in the parent) will be in place, and the unique files for its build will be in the mini-repo. A rough sketch follows.
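Per flavor, something like this (a sketch; the mini-repo name and ISO naming here are placeholders, and live-image-amd64.hybrid.iso is live-build's usual output name):

#!/bin/bash
# Build-amd64-ISO.sh - sketch of a per-flavor wrapper in the parent directory.
MINIREPO="PepOSx86_64"                          # placeholder mini-repo name
mkdir -p "$MINIREPO/fusato"                     # covers a first-time build
cd "$MINIREPO" || exit 1
sudo ./pepbld.sh                                # symlinks resolve back to the parent
mv fusato/live-image-amd64.hybrid.iso \
   "../Peppermint-$(date +%Y-%m-%d)-amd64.iso"  # rename, as the BldHelper scripts do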
Each build maintainer has their own repo.
Each of the unique builds are separated by as much as is required.
Since the Main repo gets updated on a regular schedule, anyone accessing the repo can get this one repo and be able to build any of our ISOs being offered.
There shouldn't be any confusion over missing files or a need to pull in adjacent repos.
Everything will be in one easy-to-find place, and they can start their build immediately.
If the builds can't share "fusato" build directories,
each build repo needs to follow some predefined templating in the layout, to match its counterparts.
"Smart" & "Pretty". I like it. Now to start implementing it.
2022-02-21 AM by KsWoodsMan
Still more HouseKeeping done. This includes removing peptools in favor of the adjacent PepProTools repo holding distro specific apps.
Getting ready for a transition to a newer and more capable version of the calamares installer.
I'll be working this into the testing builds then nightly and finally to the release builds.
Having the builds all separated will take some "creativity" to mesh them all into the main repo.
LOL - "The more creatively you try to tweak things, the more creatively they can break."
I think it is doable. It will require a much finer-grained updater than initially expected.
Until the batch updater is fully functional, there will be some minor differences made in the build server's crontab for the release build.
Still more HouseKeeping in this repo. More of the x86 files found and removed.
Noticed dissimilarities between the directions the builds may be taking.
Some may be good across the board.
Not wanting to stray too far from what I know of as the core principles, I'll hesitate to include all of them till we are all in agreement.
2022-02-20 by KsWoodsMan
Added 3 additional packages to the nightly build. These include gufw, mugshot and simple-scan. Dropping qt5ct from the testing builds as it no longer seems warranted.
A GUI for iptables seems reasonable, and mugshot doesn't take much space to appease some crowds.
And then simple-scan, which I'd forgotten to include/test with a network scanner during the final testing phases.
After making a new repo to accommodate the 686-pae builds, much HouseKeeping was needed in this directory.
Separating the builds allows me to more fully focus on dev of the amd64 version and the upcoming Devuan version.
2022-02-19 by KsWoodsMan
In the 3 external repos used for common files, the underscores ( _ ) were removed from their names.
The BldHelper scripts were edited to reflect the new change.
Be sure to adjust any local git directories for this change, as well as changing the PepBld scripts to reflect the new changes.
The proposed adjustment to the version of calamares being used was accepted with enthusiasm, giving many additional options during the OS install.
Also, Net Install options were considered and agreed to be an upcoming "feature add".
2022-02-18 by KsWoodsMan
Additional repos were added to Peppermint_OS. These include Pep_Pro_PixMaps and Pep_Dev_Tools, as well as renaming Pep_Hub to Pep_Pro_Tools.
These will be used as common files to be sure apps in /opt/pypep have the most current set of icons available while building ISOs.
Updated BldHelper-*.sh scripts used by the server for automated builds.
The PepBld-i686-*.sh files will need 3 lines edited or removed / moved to a new area at the end of the build script(s) to find these current files.
2022-02-13
In the forum, at https://forum.peppermintos.com/index.php/topic,11171.msg107269.html ,
ManuelRosa has expressed enough interest in a 32-bit version to have gone to the trouble of posting the required edits for the build.
All the required information was in one place. It was well presented and easy to follow (: nicely done ! :) KUDOs !!!
This looks like a good thing, for some. Be sure to watch this thread for involvement and acceptance.
Being a "community driven project" , I see no reason, at this point, not to give it firm consideration.
Some of the web reviews I've heard of, have seemed disappointed saying "They aren't going to offer 'this'."
That I know of, there hasn't been a conscious decision made NOT to include this.
The first thoughts were to get the amd64 version out *first*, then follow it with additional builds.
2022-02-12 AM
Just finished getting the Snap Store in hub to work. I hope.
Fixed the old error in hub.py involving btnspm/lblspm to btnsnst/lblsnst .
Added ssst to run an installer after snapd is recognized as installed when the hub starts.
Added a symlink in pephooks/normal/0600-... from snap to /usr/bin/snap-store.
Added a test for /snap/bin to exist. This shows up after `sudo snap install snap-store && snap-store` is run.
Added more to that: if it doesn't exist, then install it using snap.
This is after countless fresh installs, because purging snapd wasn't sufficient at removing everything.
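The check boils down to something like this (a sketch of the logic described above):

# Sketch: install snap-store only when /snap/bin isn't there yet.
if [ ! -d /snap/bin ]; then
    sudo snap install snap-store
fi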
Logged into the build server to build a new release ISO which includes current fixes in the new Peppermint Hub.
I did mention the existence of the replacement for Release 2022-02-11 that fixes the PepHub troubles.
A new peppackages.py is in the works. So it's unclear to me if the current revision will be updated immediately, so the hub still works after snapd is installed,
or if the current release will be skipped, waiting for the newer peppackages.py .
I REALLY need to get the procedure from G for how to update the release files at GitHub and SF .
Having this info, I could script it into a cron job on the DO server.
Checked persistence with a /home partition. It added 10 seconds to boot-up. OK, not bad; it's a really slow USB 2 drive.
The last test, adding /opt to persistence, was painful. Check later if this has improved.
Then try it with the slow USB for full persistence, comparing that to Pep-10 on the same USB and with a USB-3 drive with far better specs.
Noon: I checked into the build server to see why a new build hadn't happened for the testing build.
Something in the build caused it to crash, but I'm not sure at what point. The following build overwrote the tmp logfile.
Fix this in the housekeeping portion of the BuildHelpers so even if the build exits early the logs still get moved out of the way for the next build.
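A trap at the top of the helper would cover early exits (a sketch; the log path here is a placeholder):

# Sketch for BldHelper-*.sh: move the log aside even if the build dies early.
LOG=/tmp/pepbld-log.out                          # placeholder path
trap 'mv -f "$LOG" "../pepbld-$(date +%s).log"' EXIT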
PM: Added a comment in the checksum file of the release ISO showing the DOB of the ISO.
This should make it easier to verify that a newer ISO is available to devs and end-users.
Not sure how well it is going to work out trying to also check pre-dev repos for new files to include those files in the Dev Repo.
This seems better placed with the author/maintainer there. When they are ready for files to end up in the Production repo, it would seem more natural to have the author push them to the Dev Repo.
Then, when edits are found to be needed in the Dev repo, the editor should push those changes from the dev repo back to the Pre-Dev as well.
This would help keep old errors from being reintroduced.
Directly mentioning that certain files in the dev repo have changed Has Not been effective at keeping previous errors from being re-introduced from pre-dev.
Otherwise, the auto-updater will probably have to be rewritten to use rsync to scan raw files in the online repo,
instead of the much simpler method of looking for local changes between the Dev repo (KSTestRepo) and the Production repo at PepDistroConfigs.
There were times weeks went by that finished files were sitting in a pre-dev repo and not pushed to the production or dev repo.
Some of the finished files might have been pushed to the production repo as well.
Previously, this has caused much confusion over what was or wasn't a final version/revision.
Also, several setbacks occurred where the final version was in the production repo but never showed up in Development for testing by others.
Also, never having gone through the dev repo, no review was ever done on them.
Because these final versions never appeared in the dev repo (as expected), they were overwritten with older versions or not included.
This may stem from my not having worked in the same shop as other devs and not being fully aware of the S.O.P. in use by them.
I do think it is prudent, making for a smoother workflow, for the _author_ or _maintainer_ of certain files/dirs to move their changes from pre-dev to the dev Repo.
If edits in the *Dev Repo* happen _and_ a pre-dev repo exists where the originals reside, then it also makes sense for the edits to be pushed to where the pre-dev work exists.
This way, the original author sees and gets the newest edited files when they renew local copies by pulling from the pre-dev repo.
2022-02-11
Pulled down the web copy to try, and burned a fresh one from testing.
The release was looking good till someone said the SnapStore in the hub wasn't working, again.
He gave an example of the error pointing to line 233. YUP ! An old error keeps finding its way back into that file.
After snapd is installed in the system, the hub looks at it differently and calls another app than the original.
That's fine, except the new action isn't defined and nothing is written there for it.
Wrote a how-to for setting the pinning as expected, with unstable only available when explicitly called upon.
What took 20 minutes in some web shows takes less than 2 minutes, and BANG ! it's done and working completely as expected.
Figured out that LUKS encryption DOESN'T work for the / partition during an install.
It wasn't spelled out exactly.
That fact was merrily danced around with "can be used to encrypt /home or swap partitions and LVM's, except for boot."
Just because / wasn't mentioned ... was and should be NO indicator it works on / . OK, issue closed.
Workarounds are available, but not during a normal install. Sheesh !
Added an extra line to BldHelper-release.sh to add the date of the released ISO to the checksum file.
It wasn't much and adds 37 bytes to the file size, taking the checksum file from 153 bytes to 190 bytes.
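The added line amounts to a dated comment ahead of the hash, along these lines (a sketch; $ISO is a placeholder for the release image):

# Sketch: record the ISO's build date as a comment in the checksum file.
echo "# ISO built $(date -u '+%Y-%m-%d %H:%M') UTC" > "$ISO.sha512"
sha512sum "$ISO" >> "$ISO.sha512"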
2022-02-10
In the testing builds - I changed a variable for the repo from bullseye to stable.
The build went through as hoped. It boots, and all works correctly as it should.
Changed the sources.list and 99pin-unstable to reflect testing (50) unstable (10).
I do get a warning that apt expected bullseye-updates but got back stable-updates.
Everything other than that notification works: pinning, updates, installs ...
Very usable.
Added bluez and bluez-firmware to improve functionality for BT users.
RWC has mentioned inputattach for iPhone devices. It doesn't look big or have large dependencies. OK
Gearing up for the next release. Still working on the robo-updater. Parts are working correctly.
Went a week without incremental updates to the Production repo and handled it in record time, compared to doing the same by hand, a little each night.
The build scripts in the Dev repo are going to HAVE to be manually updated; not much getting around that.
Once the RU can spot a difference it will move the Dev file to Production. But moving changes from the testing to nightly to the release builds hasn't proven to be scriptable.
The pinning is absolutely correct.
Been waiting for edits to the hub and actions source file to come in. Life gets in the way sometimes, and I wasn't back till hours after they arrived.
No time to review or test; they were pushed through.
Found a build log from the Debian XFCE4 project; it takes some 'gleaning' to pull info from it, but it seems we are on a similar track.
Theirs seems to be using 'make' where ours uses shell scripting. Potato, potahto.
Not as bad as finding Qt options for calamares from a hexdump of the binary.
2022-02-08
The misconfigured pinning has been resolved by adding a colon (:) as needed in the correct location.
For the LUKS issue, I'll be pulling down a src copy to read, looking for comments there from the author.
Looking forward to seeing a few extra additions to the welcome screen.
A couple of touch ups there should have 3 search engines added in the "Links" area at the bottom.
Also backgrounding the web apps keeps from blocking the other apps available on the welcome page.
Listening to and watching the reviews from the initial release gives insight into where their eyes fall, what they are seeing/looking for, and where.
They are familiar with the XFCE4 desktop already and are looking for new baubles, gadgets, gizmos and shiny things there.
Otherwise their over-all attention seems lost quickly, not taking time to do much more than "kick the tires". With CI|CD we'll stay in the forefront for delivery there.
Now that the overall pace has slowed, dig deeper into the back-logs.
We have tons of X-apps installed that aren't showing up in the menus.
Yeah, ok, they aren't the prettiest (xcalc), but they _are_ there and available, though gone unnoticed, and are the basis for better looking apps.
Maybe do something like - for i in $(busybox --list); do [ -x "$(which "$i")" ] && ls -l "$(which "$i")" || echo "The app $i is not in our path." ; done
Submitted a newer peppackages for review, with 3 additional files to include DDG, Yandex and Metager search engines at the bottom.
I didn't have any 20x20 pngs for the icons for them. And mentioned this.
The changes are minor but stop the app from blocking new instances of the additional features.
Blocking them doesn't stop them from being activated. They all go into a queue, causing them to act similar to pop-ups coming from nowhere.
In the older version this is demonstrated by starting one app then trying to open different apps.
Repetitive clicking on any of the buttons causes that one, or others, to restart over and over.
2022-02-07
Last night's build to test an encrypted install runs through without errors.
Reboots are as normal, using UEFI; the grub config has the calls to load crypto modules, and they are present.
The decryption password is asked for by grub on boot; the boot process starts and then times out trying to access the root partition.
The partition can be accessed from another installed OS as well as from the live-session.
It's not errors with the encryption nor the password. The problem is in the initramfs.
Been thinking about streamlining parts of the build by adding 2 more directories AFTER the 2022-02-10 build.
These 2 directories, pepnightly and peptesting, will have pep* subdirectories containing files unique from the release.
2022-02-06
Referring back to 2022-02-03: unstable has been commented out of /usr/sbin/sources-final.
Pinning in /etc/apt/preferences.d/99pin-unstable has been commented out of the PepBld-*.sh scripts.
A bona fide concern has come in about installing with encryption. Basically, "Not Working" is accurate.
Grub is asking for a boot password. Good so far.
The Grub menu selection comes up as expected. But the boot process stops while trying to decrypt the partition.
Drops to a root initramfs prompt.
Looking for missing dependencies points to cryptsetup-initramfs & cryptsetup not being in the initrd.
https://live-team.pages.debian.net/live-manual/html/live-manual/customizing-run-time-behaviours.en.html#588
echo "cryptsetup" > config/package-lists/encryption.list.chroot
2022-02-07: Looking at previous L-B log files, cryptsetup and cryptsetup-initramfs are and have been getting into the build,
but they don't seem to be appearing in the initrd for use during booting.
I may have to extensively go through build logs as well as debug files for the installer to see where they're getting dropped.
2022-02-06 AM
It's coming to notice that some of the apps expected to be current in the stable repo are not getting updated.
FF-esr seems to be high on the list for some.
Reports of ver. 91 being in old-stable but not stable sound like it was removed by Debian or is in the middle of a "re-fit". Just our luck.
From the installed OS, look into how pinning is being handled for this.
It seems odd that the Live-session isn't behaving here as expected. But then they _are_ 2 different animals.
Also noticed was the lack of SPM in the hub. Odd, as it was seemingly defined and expected to be there.
A quick fix there should have the button location brought out to the GUI.
Installing to an encrypted partition has not gone as well as expected for those that use it.
It seems as though either a dependency is missing from the live session, not being installed, or it is removed during the install.
2022-02-05 Ks
With the release behind us, I want to spend a bit of time to "soften" the wording in the boot config files.
Change "Recovery" to "Direct Boot" and have separate menus for sda, sdb, nvme01, mmc0 and mmc1.
Then trim the padding from the file(s), as most is now unwarranted.
2022-02-06 PM: Also to add a timeout before booting to a default.
Devuan is next up on the list. Coming in at the tail of the Pep-amd64 project, I didn't have much time to really take in the entire process.
Thankfully we managed, and none of us should feel as though any part of what is there requires defending.
Having some recently invited fresh eyes on the project should help things.
There will always be someone's pet app that didn't go in the build.
It's expected to happen. Yes, Peppermint has changed.
Otherwise it would have been Pep-10 Re-respin.
We kept some of the classic looks and brought with us new innovation.
2022-02-04 Ks
Adding an ampersand (&) to the end of nearly all of the commands in the welcome window allows more than one child window to be opened.
example:
def rls ():
    # pwel.destroy()
    os.system('python3 /opt/pypep/release.py &')
The exceptions to this are peppackages.py and hub.py .
These 2 apps import singleton to allow only a single instance.
Without the "&", the hub and the others will block the other apps till the first one is closed.
With the "&", it will allow multiple instances of each (except hub.py and peppackages.py) and also no longer blocks the other apps from starting right away.
Also seen: an error message that python3 was unable to raise the window for hub.py, and another error for peppackages.py .
Second of note: adding "&" to the pks command breaks some retries at opening peppackages.py .
Logging out and back in corrects this, temporarily. This app is now "stand alone" with python3 in the bang-line.
More trial and error testing this without "python3" following pkexec is warranted.
Mentions have been made about MeWe not getting attention in the Welcome screen.
Another "nice feature" in the bottom row would be to have one to open a WebSearch to metager.org or duckduckgo.com.
Really cool would be if it had an address bar.
This would allow users to use the agnostic browser to find and DL their browser of choice, not in the debian repos.
2022-02-03 Ks
Had a ZDNET article pointed out about a "serious security vulnerability" using pkexec .
After applying their recommended patch, this broke more than it fixed, as the set-UID bit for root is REQUIRED.
This patch was immediately removed.
Unpinned the stable repo from 900 to match Security and Bullseye-Updates at 500 .
Left unstable pinned to low priority (10).
Added --security true \ to the testing builds. This should keep things current as well as allow Debian to manage taking care of the pwnkit for pkexec.
--backports true \ was also a recent addition, for Nvidia drivers.
2022-02-02 Ks
We released to the public. Not the frantic Grand Re-Opening I'd expected. But, it is building.
At least the pirates aren't killing the tourists and the Dinosaurs aren't eating them either. yay.
2022-02-01 Ks
With no clear route back to the Welcome window, with the exception of pge, "pwel.destroy()" has been commented out from welcome*.py.
This keeps the Welcome Window alive as the parent process.
The exception to this was for the pge command, which does allow the parent to close, in favor of the Extras.
Closing these with the button, instead of using the window manager decorations, allows these to return to the main_loop.
Check also if the min, resize & maximize window decoration does the same.
2022-02-06 PM: Closing the task by using the window decorations does NOT allow pge.py to return to the Welcome screen.
2022-01-02
Plenty to update but not sure where to start.
Calamares - To me, the look has improved considerably. Still a few places to change:
- font brightness +/- depending on the background intensity.
- buttons to resize and change their color (on hover).
- width of some dropdowns should be decreased.
- color change in the background for tool tips.
- tweak the size and aspect ratio for the slides to better match that area.
- darken or lighten the intensity of some "icons": GPT/MBR in partition selection, "About" and "Finished".
32) This really should have been started as a Journaled Log with dates and times.
Then add to the journal references to the section here more as a ToDo at the top and a DoneList towards the bottom.
A full rewrite here is in order. The journal should contain the thoughts and actions for the day, referencing entries for when the thought began.
For completeness, retain this original Notes file, which began in early to mid-November of 2021, and move forward to a journal-style log.
31) This seems like as good of a time and place to start a bulleted list of minor things in the glib-2.0 directory.
Things more associated with "the feel", as opposed to "the look". Small items I have been working towards as this has progressed.
- set peplive password to "blank" - # This must be rooted out in a live-boot or live-config setting, found on the ISO but inserted during builds.
- keyboard numlock = on - Done
- remember numlock state = true - Done
- remember session = false - Done
- Alps touchpad edgescroll = true - w.i.p .... look in /usr/share/glib2.0 for these in gsettings.
- Alps touchpad taps = true - w.i.p .... my laptop has this issue ~/.config/xfce4/xfconf/xfce-perchan*/pointers.xml
- Synaptic touchpad edgescroll = true - w.i.p .... look in /usr/share/glib2.0 for these in gsettings.
- Synaptic touchpad taps = true - w.i.p .... Dustins laptop has this issue ~/.config/xfce4/xfconf/xfce-perchan*/pointers.xml
- touchpad 2 fingered scroll = true - Done
- nemo right clicks to open terminal - Done * This setting changes during OS install. Continue to rely on a symlink for this.
- thunar open maximized = false - Done
- thunar on open dimensions = 724x400 - Done
- firmware-ralink for my MT7601U wireless - # Misconfigured RTL8188CE module breaks WiFi till the RTL8198uu module is unloaded.
- firmware-mediatek for my RTL8188CE - # Not finding the correct module and uses module RTL8198uu instead.
- firmware-iwlwifi for intel wirelesscards - Done
-
-
-
- - - - - - -
30) In an attempt to speed up the build process and lower the bandwidth used, I have been (mostly) able to take care of stale mountpoints in the chroot(s).
Leaving the ./fusato/cache directory in place and deleting ALL other files and directories around it lowers local BW usage by 300 GB/mo .
This also reduces the time required, during builds, to fetch 800+MB of .deb packages.
On my slow connection this saves me 30 - 45 minutes for each build.
Builds are disk intensive. Compared to using a spinning disk, using a SSD drive would cut local build times to under 30 minutes.
29) During a build, `lb` creates and uses mountpoints in the chroot(s) in fusato.
If a build fails, it leaves these set and causes problems that have plagued the build process, producing inconsistent ISOs in the process.
The command `lb clean` probably takes care of these, as well as removing old files from the last build.
Unmounting them in the opposite order from how a previously failed build mounted them comes down to:
`mount | tac | grep "${PWD}/fusato/chroot" | cut -f3 -d" " | xargs -r umount`
Starting a build with "--clean \" may also do something after the obvious files and directories are removed.
This means, after a failed build (locally or on the remote server) no more manual intervention for this before the next build starts.
28) New Years Day 2022 - I found/made time to start working ahead a little bit.
I started working (locally) on a separate build that doesn't rely on "--apt-recommends true".
And I thought I was having a hard time getting things right for online/offline grub and grub-efi installs.
It is progressing. I've gotten ahead of myself trying to build the live session without having the bootstrap set up correctly.
By not having the bootstrap built correctly, with `live-boot` included, `lb` effectively isn't creating any user accounts to log in to.
I can still get into the bootstrap with "init=/bin/sh" in the boot configs for the ISO .
Jan 13, 2022: I added live-boot to the unstable build to see if this creates a user account used to log into the live-session. - Nope
27) While on the welcome window, be sure to check whether the manual one won't open while the auto version is running, and vice-versa.
While both can be opened at the same time, it is unlikely to happen.
Changes made to welcome.py and welcome_man.py cause welcome.py to close with the PepGetExtras (pge) and reopen welcome_man.py .
26) The startup Welcome Window still "destroys" itself when anything there is activated.
The devs know and anyone familiar with the OS will find the one in Favorites.
New users won't know how to get it back. This will be a "point of frustration" for them and something for the YT naysayers to pick apart.
25) The Debian update from 12-18-21 has introduced a few unforeseen things.
First it was the kernel version change, which was found the same day when the ISOs failed to boot.
Then, while trying to install FF, the install would hang with only minor warnings being shown.
This is on "the mend", with a temporary solution going into place and a direction in mind to keep this from showing up again.
Possibly adding FF-ESR as a second choice to FF, but then where do you start and when do you stop ... ?
**** Looks like the fix, for now, was to bring the FF-unstable install out to a terminal window to interact with the user.
I need to revisit redirecting the IO for terminal apps to VT's. Ahhh the "Good ol' Days" from WirelessKs.net with apps displaying on 30 VT's.
Getting the I/O from `dialog` to show and respond in the output box INSIDE the installer window would look cleaner.
Look into seeing where the display box is getting redirected from.
* It is a "pipe" where the `stdout` is piped out but nothing to redirect `stdin` back to there.
All of the tty's, except where a normal user is logged in, are owned by root and only read permissions are given to members of the "tty" group.
The code in peppackages.py mentions tkinter-terminal. If this terminal opens a tty in /dev/pts/* instead of piping the output to it, run APT in it.
Otherwise it might take some creativity to open a terminal in a different desktop,
then to use redirects such as `htop > /dev/pts/0 < /dev/pts/0 & ` . Can the same be done for `APT` ?
This should cause apt to write to AND read from the tty opened by tkinter-terminal. Will apt use it as a fallback to interact with the user ?
* How do I effectively communicate this to the devs working in that area ?!
Backing up to FF-ESR solved this. Later, I'll still have a go looking into tkinter-terminal. Redirects to/from the OS might be useful at some point.
24) Had a snafu collision between ICE working and wrestling a workaround for the snapstore into this.
It seems resolved. Further testing is required. The fix was put in ./pephooks/0600-*symlinks*
Jan 13, 2022; Ice is working and no negative reports about the snap store. No news is good news.
23) It is also time to start thinking about automating the process of updating files from the dev repo to the production repo.
This shouldn't be that hard to implement. This is just up `rsync`'s alley; a sketch follows this item.
Check to be sure the creation times are intact when using `git`.
This is to see if ctime is when they appear locally or at the time they were created on the remote user's system.
As long as no one creates files in the production area, automation here should succeed.
Tommy mentioned automated code-checking tools to go into the dev directory. Should be doable.
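The sketch, dry-run first (the checkout paths here are placeholders):

# Sketch: one-way sync from the dev repo to production.
rsync -avn --delete --exclude '.git' ~/KSTestRepo/ ~/PepDistroConfigs/
# drop the -n (dry-run) once the file list looks right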
22) With automated builds running smoothly for the last month, we hit a snag while trying to run them with the times overlapped.
The first build to start failed while waiting for an ISO to show up. The second build finished (roughly) on time.
It is time to implement a lock file and detection, so if a build does run over the expected time, the next build won't destroy it.
This will tell subsequent instances to pause and wait for the first one to finish before continuing.
Jan 13, 2022: this still needs doing, and will be even more necessary when we add extra flavors.
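flock(1) gives this nearly for free (a sketch for the top of the BldHelper scripts):

# Sketch: serialize overlapping builds with a lock file.
exec 9>/tmp/pepbld.lock
flock 9        # later instances block here until the first build finishes
# ... run the build; the lock releases when the script exits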
21) Earlier it struck me: now that the base is stable and working, developing for this would go a lot faster if ....
I were to install the OS to a decent drive IN the machine I'm working from. This way, I am right there IN the OS.
OhMeeGawd ! How did I not see it sooner !
Install the OS with a home partition on the disk - WITHOUT my current home partition attached.
During the install, create a normal user account.
Copy the required files from the old $HOME_PART to their/my new ${HOME} and Presto !!!
Normal caveats apply regarding protecting the home partition. Take great care not to wipe that out. Keep Back Ups !!!
*** Now that persistence is working with our ISO, this just got even easier.
20) Current discussions were on the topic of "Which default browser to use".
It's been handed down that "NO Web Browser will be installed."
This is on the premise that "browser choice is a matter of choice left up to the user (person in the seat)."
OK. Seems harsh. But, "not my baby". I'm in the delivery department, not the conception department.
*** Tommy has something in the works that is looking EXTREMELY promising for use in place of an "x-www-browser".
I've seen the current results. I'm thoroughly impressed. Pointing /etc/alternatives/x-www-browser at it negates the need for a named browser.
It's resizable, has an address bar, goes forwards and backwards, will DL files, plays YT video, and sound is working.
Getting it to accept a URL from a command line, and possibly the window size on open, makes this a drop-in replacement.
19) My testing builds on the build server are running again.
Or so I'll find out tonight. Dec 5, 2021
It might not have needed the PATH variable set.
Having it set for `cron` won't hurt though.
18) We started with '--hdd-label "Peppermint" \' in the config portion.
We are building an ISO not a HDD image. This isn't PenDriveLinux.*
This was removed and commented out in my script, with a note.
That I can tell, removing it hasn't seemed to affect anything.
I see no changes using `strings -td $the_file.iso` .
* (PenDriveLinux - Hmmmm Ideas ! A working full-featured desktop in under 500 MB.)
17) After setting the config variable "--apt-recommends true" to "false", one of the hooks scripts spit an error.
( See #29 ) Not just an error: ANY error in the hook scripts causes the build to fail abruptly, leaving stale mountpoints in the chroot.
The error was a missing directory (or variable) where, I guess, files were to be written to (or used) during the rest of the build.
This was bad-Bad-BAD. The build crashed at 9 minutes into the run, each time.
Reading the local file /usr/lib/live/build/config has been both a great help in trimming and a way to break local builds.
Also, because --apt-recommends was set to 'false', `sudo` wasn't installed, which caused some things in the 0520 and 0540 hooks to fail.
`live-build` runs as root and has no need for `sudo`. Without --apt-recommends auto-installing the kitchen sink,
the trimmed versions will need extras listed explicitly. - Choices -
Also, in hooks/normal/0540- there is some odd bit about adding a $Desktop in /root and setting the icon for the root user.
16) The pepbld-log.out shows needless time spent running 'zsync' on the ISO.
By adding "--zsync false \" in the config portion of pepbld.sh, this should stop.
Its removal has shaved the local build time to 1 hour 15 minutes. (note to self - install that SSD in place of the Sandisk USB for the dev desktop)
Further time saving was found by offloading the logfile to /tmp to stop blocking R/W's for the build.
This helped locally. Not much I can do on the remote server.
Jan 5, 2022: Another big improvement was to do writes in ./fusato to a disk on a separate channel from where reads are done.
More gains were found again by deleting ./fusato/cache/bootstrap as well as everything else in fusato EXCEPT ./fusato/cache .
This is where the .deb packages are held, as well as the InRelease and Packages.gz files for the local repository.
ALSO, I unpacked the Nov 11, 2021 tarfile, from before the Debian version change in Dec, and ran a new build from there with the old packages.
As hoped, Live-Build recognized the old files and only DL'ed the files that have changed in Bullseye.
Even better was that Live-Build took care of removing old packages after the new ones were in place. Win - Win !!!
Local build times are now under 38 minutes for a 1.5 GB ISO. What HAVE I done with that SSD ?
Hmmmm, USB-3 RAID-0 is doable and should hit the upper limit of the host transfer speeds.
15) Ran into a problem with running `./pepbld.sh` as a cronjob.
It seems the output of `lb` might need to be attached to a terminal to run correctly. Heh ! tee to the rescue.
cronjob: runs a helper script, and the helper pipes the outputs to a file.
`tee` couldn't care less that `stdout` gets blackholed or gets written to /dev/null.
Sending the output to /dev/null seems better than it showing up in /dev/console.
From ${working_directory}, I'll try changing the pepbld.sh line to "`./pepbld.sh | tee ../pepbld-log.out >/dev/null`"
This way the logfile will show up outside ./pepdistroconfigs in KSTestRepo .
Cronjobs run with the power of the user but not with their full environment.
We haven't been setting the PATH variable in the scripts (early on I wondered why not).
The scripts have been getting called from the CLI using `sudo`, and using this does set our PATH.
But since we are calling sudo in the cronjob, the script errors/exits the first time it encounters a command NOT in the cron user's path.
*** Adding PATH="/sbin:/usr/sbin:/usr/local/sbin:$PATH" on the line immediately following #!/bin/bash will make the scripts more portable.
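Put together, the top of a portable helper looks something like this (a sketch; ${working_directory} is the variable from the notes above):

#!/bin/bash
PATH="/sbin:/usr/sbin:/usr/local/sbin:$PATH"    # cron only provides a minimal PATH
cd "${working_directory:?}" || exit 1           # placeholder for the repo checkout
./pepbld.sh | tee ../pepbld-log.out >/dev/null  # keep the log, blackhole stdout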
14) I must have missed a memo that the ISO is getting trimmed and shrunk to about 1GB, or less.
The idea for this seems to be partly due to "choice": the end user has the "choice" to add what they want.
The end user might not know what choices are available. Seemingly, more than half of Windows users can't find the Control Panel.
A new user or Windows convert is likely to give up and go elsewhere, since they won't know what choices are available.
They won't know that synaptic is a package manager or even how to use a CLI text editor, much less be able to run `sudo apt install`.
Maybe I'm missing something. Perhaps there is the plan (???) of a sleek racer in mind for one set of ISOs,
and another ISO to appeal to the new enthusiasts. Pep 10 was done very nicely with just 1.6GB. Even at 1.4 it doesn't seem lacking.
13) I've been noting the location of lint files not needed by the Live-Session or the installed OS.
Getting these cleaned up, before final release, will help to trim the ISO size.
*** This requires one or a few 'hooks' called in ./config/hooks(/live and-or /normal) to run during the install - for each flavor.
This '095X-lint-removal-hook' needs to be written to be portable, so each of the builder scripts can re-use it.
Something along the lines of `for file in list-by_path; do [ -e "$file" ] && rm -rf "$file"; done` .
If it exists, whether it is a directory or a file, it gets removed, and the loop moves on with no errors to cause the build to fail.
This is a clean-up only. Not to be used for trimming particular flavours.
Additions should be done by listing key additions in the builder scripts.
12) An added benefit to the separate trees is `cron` can call MasterBuilder.sh in the root of the working directory.
From there, MasterBuilder.sh sequentially calls each of the ./Builders/${FLAVOUR}.sh scripts for each build.
The ./MasterBuilder.sh takes care of the housekeeping, deleting the old files before renaming and putting new ones into place in /var/www/html/*/* .
**** This is all worked out. By using a BldHelper-*.sh and a PepBld-*.sh for each build, pepbld.sh is only used for the weekly builds.
This does require a symlink from PepBld-release.sh => pepbld.sh, and keeping 2-3 more (PepBld-*.sh) files similar to pepbld.sh.
BUT this preserves the beauty of the C.D. nature. Each build has a unique build script from the release version(s).
After daily or weekly modifications are found to work well, then pepbld.sh can be edited to incorporate the changes.
This is exactly like having 4 repos all rolled into just one.
***This should roll together PERFECTLY when we start an x86 or Devuan build. All that's needed is another unique PepBld-*.sh and BuildHelper-*.sh.
From there one more line in the crontab and we are building a new "flavor" or set of flavors.
11) Been working on shifting from all the files spread out flat to organizing them in an actual tree.
A quick edit of the pepbld-orig.sh script gave me what I needed for mine.
I'll probably write a script to read pepbld.sh in the other repos that will write a script for their files.
The idea behind this is to have a common tree of things like wallpapers, icons and default config files.
THEN have several skinny trees with the specifics for a certain type or flavour of Peppermint.
Some discussion has gone on about doing more than an amd64 and x86 Debian distro.
This will allow the team to _more_easily_ support additions of Devuan and possibly ubuntu.
Another benefit is, the possibility of an easier model for rolling releases of each flavour.
A single change in the common or parent tree will immediately show up in the others as well.
Each of the build scripts will have its own list of packages to be installed.
The common tree can be used to pick from various files & folders or backgrounds for a full, sleek or stripped model.
***Actually, the current method is just as modular and pluggable. Changing the format only means less typing for one build.
The only real benefit is it is already in a tree format.
Each of the different builds/flavors can have their own "skinny tree" with their specific config files that can be copied in place overwriting files in the main tree.
**** In the long run, having the files spread out is more like picking files off of a shelf, all in one store,
rather than going to several orchards for a piece of fruit. The build scripts will detail what goes into each recipe.
10) The files /isolinux/live.cfg and /boot/grub/grub.cfg are close. They get the job done and look good during boot-up.
With more than just /dev/sd[a-z] now, might include /dev/nvme0[1-9] and /dev/mmc?? , each in sub-menus.
***Closer inspection of the sys/isolinux.cfg files shows a stdmenu.cfg where I should be sourcing the recovery options.
Let's get this written up, included with the builds, and then sourced from stdmenu.cfg with the others.
Then go back to /boot/grub/grub.cfg to do the same. :) Modules, not monoliths. ( see #11 )
9) I have a bash_aliases file in ./pepaliases that will get trimmed. Most people in the seat won't need/use/know what is in there.
They might use linux for years ... and never know ~/.bash_aliases exists. :D
Noticed that the line for bash_aliases was segregated from the others.
It goes from being a visible file in our folders to being a hidden file in the live-session/OS .
Consider trimming the one for release further now that there is also a larger list of applets in the -nightly and -testing builds.
Cavy was informed of the dangerous applet in the testing build used to quickly recreate the partitions needed for the Dev Desktop for my local builds.
I have warned others to stick with the release and nightly builds. * We'll see how well they listen. *
8) I feel pretty confident the snapshot from Main on 11-27-21 is going to work. I went through it by hand.
There were only a few permissions that required changing. Most notable was 'install-peppermint' being copied to /usr/bin, and some of the .desktop files.
**** Created a pair of tools to recursively look for and report directories and files set incorrectly.
Keeping in mind that some .desktop files require being +x, images and config files do not. A sketch of the idea follows.
Still seeing a few trickle into our folders that need tweaking. Nothing to worry about, for now.
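The pair of tools reduce to `find` invocations along these lines (a sketch of the rules above, not the actual tools):

# Sketch: report .desktop files missing +x, and images/configs that have it.
find . -name '*.desktop' ! -perm -u+x -printf 'needs +x: %p\n'
find . \( -name '*.png' -o -name '*.svg' -o -name '*.conf' \) \
     -perm /111 -printf 'should not be +x: %p\n'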
7) Reverting again to a newer tarfile used for successful builds, dated 27 November 2021.
A serious snafu happened when I "off-handedly" changed the permissions of files in pepdistroconfigs then sent them upstream.
This is the reason for reverting again; using `git reset SomeLongStringFoundInTheLogs` would have saved time.
I'll have to unlearn old habits as I get more familiar with `git` and working in a larger collaboration than just me & myself.
**** Git-Foo is getting stronger. `git diff ${file_name}` is a _blessing_ for reviewing changes before a commit/push .
6) I have been able to get Offline installs working by sending a list (one package per line) to installer.list.binary.
This is the location used by Live-Build to create /pool and /dists on the ISO for offline installs.
This list needs to include grub2-common; shim-signed and shim-unsigned need to be here as well. ... will fill this in later.
In the chroot section - I have efibootmgr. The pkg grub-common is pulled in, and grub2-common is in /pool .
In the installer is - grub-common, grub2-common, grub-efi-amd64, grub-efi-amd64-bin, grub-efi-amd64-signed, grub-efi-ia32-bin,
grub-pc-bin, libefiboot1, libefivar1, mokutil, shim-helpers-amd64-signed, shim-signed, shim-signed-common, shim-unsigned & ucf .
The packages grub-efi and grub-pc are conflicting and have been removed from the installer section to be saved in /pool .
****See notes in section 5 about using `grub-cloud-amd64` and `grub-legacy`.
And the sources.list for the live-session <== THIS was where the problem began, again. And a buggy BIOS in a test machine. See 5, below.
5) After more than a week solid of trying to find the problem with grub on bare-metal, and neglecting all else, I am giving up.
I'm reverting to the tarfile used for a successful build, downloaded from Codeberg, dated 14 November 2021.
This doesn't have the styling and improvements from @grafiksinc and @jonesy. But calamares is installing grub correctly.
**** Jan 12, 2022: Off and on for the last 2.5 months, `grub` has been the bane of my existence.
This was FINALLY cured by COMPLETELY removing all instances of grub* from the packages list.
Using `grub-cloud-amd64` in the installer lists was an *O*M*G* breath of fresh air and a burden removed.
When it comes time to start an x86/i686 build, Debian also provides the `grub-legacy` package, just for this.
**** And now it isn't pulling in grub-pc or grub-common for the offline installer. Just include the dependencies and co-depends for grub-cloud-amd64.
I went back to re-re-edit all of the build scripts for the full list of dependencies of grub-cloud-amd64 in the installer sections.
ALSO - Note that BFI'ing a sources.list into the builds breaks things. It requires finesse. (BFI = Brute Force)
* It wasn't broken anymore. Stop trying to fix or improve this section. Months of grief could have been better spent elsewhere.
- "Grub-common breaks grub-cloud-amd64". Something from '--apt-recommends true' is pulling grub-common in, as it's installed in the chroot.
- I feel somebody's pain as they were going through this before now.
- As it turns out, it is a compatibility problem with the machine I was testing offline installs on.
### This whole section needs a rewrite. The problems I was having were more to do with the buggy BIOS than with the current builds.
4) Found a new problem no Pep testers found or commented on.
During "Install Alongside", Indecisively selecting the partition to use crashes the installer on the fourth click.
No warnings are given , no errors to the screen or to the install log. Maybe in /root/.config/calamares/session.log .
"... don't use this option." ???!!!??? Maybe not, but it _needs_ tested .... Check the Debian USB for the same bug.
3) During the install, Calamares is expecting `smartctl` to be in its path. It isn't in the binary.
I'm not certain (yet) whether it needs to be installed in the live-session or the chroot.
It's not fatal, the installer logs mildly complain about it not being there.
Add it to the installed packages in the binary to see.
Added it to the list in chroot.list.packages .
2) Doing an install with `toram` crashes with an "out of space" screen error at about 86% completed.
Further looking in the log shows /dev/shm is mounted to /dev/sdb1 (the ISO).
***This is supposed to be a tmpfs.***
1) **Why, oh Why** does the "Install Alongside" option rearrange the partition order ?
The graphical depiction is correct, but they are labeled out of order in the GPT.
This is going to break something for (OCD) people expecting the GPT tables to be in order.
I came in late for the testing phase. I started in UEFI beta. Check this behaviour using MBR in a Legacy install.
*** Let It Go ! ***
####################################
*) Persistence is still on a back burner as we work through Offline UEFI installs and the others take care of styling and Pepperminting.
Using ${HOME} persistence seems better currently. But ANY persistence causes the installer to fail at unpacking /live/filesystem.squashfs .
*** A thought came to mind to have a symlink from the location the installer expects it to be,
pointing to the mounted FS. A regular mount, while in the live session, should cover it up. This can go in configs/hooks/normal/0900-OS-LintRemoval
Nope. Persistence is working now, as expected, after removing stale files being maintained in peploadersplash.
*** I think I can close this one. But not the "toram" boot option. Review line 2 of this paragraph for checking persistence still.
The persistent partition is getting mounted 2x to the same directory. This is still better than when we were re-using the old EFI files in /peploadersplash/boot/efi .
Using persistence as an overlay to / was horribly slow.
Having ONLY /home mounted there was quite usable.
Adding /opt to the partition, with /home, to be able to install "foreign" applications (google-chrome) was not "great".
Manually unmounting the second instance, where the persistence partition holds both /home and /opt, makes this much easier to use.
Still not great, but 'marginally' acceptable, though not ready for prime-time.
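For reference, live-boot picks persistence up from a volume labeled "persistence" that holds a persistence.conf; the /home-only setup above amounts to this (a sketch; the device name is hypothetical):

# Sketch: prepare a /home-only persistence partition for live-boot.
sudo mkfs.ext4 -L persistence /dev/sdX3        # hypothetical device
sudo mount /dev/sdX3 /mnt
echo "/home union" | sudo tee /mnt/persistence.conf
sudo umount /mnt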
I did add a few bits to the testing build.
Just after the ISO is created and before the sha512 checksum gets written, I have added a 4th partition, outside of the ISO.
Normal tools don't allow this. But then `dd` wouldn't ordinarily be used as a partitioning tool either.
Says the one guy that uses "Disk-Destroyer" to edit boot files on a USB or ISO image.
./pephooks/540.... :
*** Skip to last paragraph ***
<strike>
I'm thinking, for the calamares installer, these debs only need to be available for install.
Their location would be in ${binary}/pool/main/[e,g,l,m,s]
_In Debian_, the Packages files that point to them are in the binary, in /live/filesystem.squashfs .
./var/lib/apt/lists/local-mirror.cdbuilder.debian.org_debian_dists_bullseye_contrib_binary-amd64_Packages
./var/lib/apt/lists/local-mirror.cdbuilder.debian.org_debian_dists_bullseye_main_binary-amd64_Packages
./var/lib/apt/lists/local-mirror.cdbuilder.debian.org_debian_dists_bullseye_non-free_binary-amd64_Packages
In Pep11 , these files are
./var/lib/apt/lists/deb.debian.org_debian_dists_bullseye-updates_main_binary-amd64_Packages
./var/lib/apt/lists/deb.debian.org_debian_dists_bullseye_contrib_binary-amd64_Packages
./var/lib/apt/lists/deb.debian.org_debian_dists_bullseye_main_binary-amd64_Packages
./var/lib/apt/lists/deb.debian.org_debian_dists_bullseye_non-free_binary-amd64_Packages
Notice the difference between lists/local-mirror.cdbuilder and lists/deb.debian.org .
I'll make a list of the packages ( ! installed to the squashfs) for Pep11 CORE and Pep11 FULL to populate these.
This should also let us have the dependency package from Sid needed for _____(?)_____
It would go in the binary at /pool/main/unstable instead of /pool/main/bullseye
In Debian the corresponding entry is - Filename: pool/main/u/util-linux/...... It is local.
EUREKA !!! We should just need to create the directory ./pepdistroconfigs/pepbinarypool .
Then set about populating it with packages we want for the UEFI install and both CORE and FULL.
Then `Live-Build` should take care of making ${binary}/pool the local repository for the calamares installer.
To test this I am going to grab /pool/* from the debian disk, insert it into the new ./pepbinarypool dir
and add a line in pepbld.sh to copy the contents correctly into the $fusato/binary/pool tree.
</strike>
<strike>
Our Live-Build and the subsequent binary do NOT know to do anything with these files.
The debian binary does NOT list these as a location to use.
*BUT* during the process of the Live-Session booting, it has a routine that DOES know.
Until this is found (look in their hooks & initrd), we'll just grab their file from the Live-Session at /apt/sources.d/Debian*.list
It is only a single line, pointing to /pool, that we can append to /apt/etc/sources.list .
*** It's created in Debian's sources-media module. *** Is ours different ? Or not used ?
The directory ./peploadersplash/pool/main already exists.
Adding ./pepbinarypool is redundant on my part.
Edit my entry in ./pepbld.sh to include what was there waiting for me.
We already have "pool" in place; we just need to use it.
<s>For offline, find and use the hook "sources-media" and the related hooks correctly. </s>
</strike>
***
What IS waiting for us is the inclusion of a newline-delimited list at installer.list.binary to be used by `lb`, not a " "-delimited one.
Later I did figure out that it could be a space-delimited list. I'd made 2 changes at the same time and attributed success to both.
That's ok though; a newline-separated list is much easier to read and keep track of during edits or while visually scanning for typos.
Live-Build uses installer.list.binary to create /pool and /dists in the ISO, and calamares uses these files for Offline installs.
Between this and the inclusion of 'grub-pc' in the packages.list.binary file, this was the key to getting Offline UEFI installs working.
**** This is where I took out everything, including all grub and grub-related entries in both this list and the packages list, replacing them all with "grub-cloud-amd64" in JUST the installer list.
..... and now it isn't pulling in grub-pc or grub-common for the offline installer.
I went back to re-re-edit all of the build scripts for the full list of dependencies of grub-cloud-amd64 in the installer sections.
Using grub-cloud-amd64 does NOT work as expected. The calamares package lists grub-common as a dependency.
The package grub-common breaks the meta-package grub-cloud-amd64.
It might be possible to use grub-cloud-amd64 in the packages list, THEN drop in the calamares.deb so the dependency on grub-common is already met by grub-cloud-amd64.
This could open up the possibility of not needing separate ISOs for x86 and amd64 builds.
The combined ISO would be larger than either current ISO. But, because of so many shared files, it wouldn't be 2x the size.
Or will `lb` spit out 2 ISOs and take 2X as long ?
If it didn't take 2X as long AND it successfully created 2 ISOs .... I haven't taken this into account while future-proofing my server scripts.
no defined area
##############################
Check this during the Pep11 installer run, from inside the CHROOT .
From the "Packages" file describing the package efibootmgr:
"Note: efibootmgr requires that the kernel module efivars be loaded prior to use. 'modprobe efivars' should do the trick if it does not automatically load"
*** It's mounted correctly in the live-session as well as in the CHROOT. ****
This is for later.
###############################
Still not sure why persistence with Pep11 gets such a kick in the head.
The filesystems are getting mounted 2X instead of just once.
When doing an install with `toram`, /dev/shm gets mounted to the Read-ONLY ISOfs at /dev/sdb1.
This is why the installer "runs out of space", exiting early when running `mkinitfs` .
This is supposed to be a tmpfs with RW.
*** Slight mentions of this bug in other groups point to the problem as having "findiso=" in the boot config entry for the "toram" option.
One other thing slips my memory - review conversations with @cavy to jog it. It was the lint files.
Missing files in the lint removal section cause builds to fail at 9 minutes in.
Around 2022-01-07 I added error checking in the script for these files.
I also added a commented list of files NOT to remove, and added to the list.
Removal of some files has had unexpected consequences in unexpected places.
Cavy's working knowledge of the GUI and retention of various information has been extremely valuable.
Having him on the team has been an asset I would dearly miss were his skillsets not available.
The CLI might not be where he shines, yet my skills are not remarkable in the GUI.
We have managed well together and the results are quite favorable. ++