<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[# journalctl -xeu "Sakuragawa"]]></title><description><![CDATA[Discover technologies and live on the edge]]></description><link>https://blog.sakuragawa.moe/</link><image><url>https://blog.sakuragawa.moe/favicon.png</url><title># journalctl -xeu &quot;Sakuragawa&quot;</title><link>https://blog.sakuragawa.moe/</link></image><generator>Ghost 5.20</generator><lastBuildDate>Thu, 09 Apr 2026 05:07:03 GMT</lastBuildDate><atom:link href="https://blog.sakuragawa.moe/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Working on Fedora Sway Spin]]></title><description><![CDATA[Fedora now has a Sway Spin that comes with Sway, an i3-compatible Wayland compositor, preinstalled and configured.]]></description><link>https://blog.sakuragawa.moe/working-on-fedora-sway-spin/</link><guid isPermaLink="false">66a1193f07183900015951c2</guid><category><![CDATA[Linux]]></category><category><![CDATA[Quick Reference Handbook]]></category><category><![CDATA[Getting Started]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Sat, 20 Jul 2024 12:55:00 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2024/07/screenshot-sway.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.sakuragawa.moe/content/images/2024/07/screenshot-sway.png" alt="Working on Fedora Sway Spin"><p>Sway is an i3-compatible Wayland compositor. 
It is designed to be a drop-in replacement for the i3 window manager, but runs in a Wayland environment.</p><h2 id="installation">Installation</h2><p>Fedora now has a Sway Spin that comes with Sway preinstalled and configured.<br>Alternatively, it can be installed on Fedora Workstation and other spins:</p><pre><code class="language-shell">sudo dnf install &quot;@Sway Desktop&quot;
sudo dnf install &quot;@Sway Window Manager (supplemental packages)&quot; --allowerasing
</code></pre><h2 id="spin-bundled-components">Spin Bundled Components</h2><p>Fedora Sway Spin comes with multiple components, including:</p><ul><li><code>swaylock</code>: Simple screen locker.</li><li><code>waybar</code>: Highly customizable Wayland bar for Sway and wlroots-based compositors.</li><li><code>rofi</code>: Window switcher, application launcher and dmenu replacement.</li><li><code>dunst</code>: Highly configurable and lightweight notification daemon.</li><li><code>kanshi</code>: Dynamic display configuration tool.</li></ul><h3 id="sway-window-manager">Sway Window Manager</h3><p>The default Sway configuration is at <code>/etc/sway/config</code>. On Fedora Sway Spin, it reads additional configurations from three locations:</p><ul><li><code>/usr/share/sway/config.d</code></li><li><code>/etc/sway/config.d</code></li><li><code>~/.config/sway/config.d</code> (<code>$XDG_CONFIG_HOME/sway/config.d</code> to be precise)</li></ul><h3 id="kanshi">Kanshi</h3><p>Fedora Sway Spin bundles <code>kanshi</code>, which is a dynamic display configuration tool.</p><p>Its configuration file is placed at <code>~/.config/kanshi/config</code>. Multiple profiles can be defined in the configuration, one of which will be enabled if all of the listed outputs are connected.</p><p>Adding the first profile is straightforward. List all outputs recognized by Sway:</p><pre><code class="language-shell">swaymsg -t get_outputs
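# Optionally, list only the output names; this assumes the jq JSON processor is installed:
swaymsg -t get_outputs | jq -r &apos;.[].name&apos;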
</code></pre><p>Take my setup as an example: I have <code>eDP-1</code>, the laptop&apos;s internal display, along with <code>DP-1</code> and <code>DP-2</code>, which are connected to my dock.<br>I am going to disable the internal display and place the two external ones side by side, with scaling:</p><pre><code>profile {
    output eDP-1 enable scale 1.5
}

profile {
  output eDP-1 disable 
  output DP-1 enable mode 3840x2160@60 position 0,0 scale 2.0
  output DP-2 enable mode 3840x2160@60 position 1920,0 scale 2.0
}
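# A profile is applied when all of its listed outputs are connected;
# the second profile above therefore takes effect only while DP-1 and DP-2 are attached.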
</code></pre><p>Note that I am using 2x scaling (<code>scale 2.0</code>), so the position of the second output should be <code>3840 / 2 = 1920</code>.<br>Finally, let <em>Kanshi</em> start with <em>Sway</em>.<br>Add a Sway configuration segment to <code>~/.config/sway/config.d/10-kanshi.conf</code>:</p><pre><code class="language-conf">exec_always {
  pkill kanshi
  kanshi &amp;
}
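# exec_always re-runs on every Sway reload; killing any old kanshi first avoids duplicate instances.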
</code></pre><h2 id="tweaks-and-customization">Tweaks and Customization</h2><h3 id="override-default-terminal-emulator">Override Default Terminal Emulator</h3><p>I personally prefer <code>Terminator</code> over the default <code>foot</code>, so I added a new segmented configuration file at <code>~/.config/sway/config.d/10-preferred-applications.conf</code>:</p><pre><code>set $term terminator
bindsym --no-warn $mod+Return exec $term
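# Reload Sway to apply the change, e.g. with the default $mod+Shift+c binding or via swaymsg reload.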
</code></pre><h3 id="map-caps-lock-to-control">Map Caps Lock to Control</h3><p>I am personally more accustomed to the Unix keyboard layout, which has no Caps Lock but an additional left Control.<br>Add a new configuration to <code>~/.config/sway/config.d/10-input-keyboard.conf</code>:</p><pre><code class="language-sway">input &quot;type:keyboard&quot; {
	xkb_options caps:ctrl_modifier
}
</code></pre><h3 id="touchpad-tweaks">Touchpad Tweaks</h3><p>By default, natural scrolling and tap-to-click are disabled. Add a new configuration to <code>~/.config/sway/config.d/10-input-touchpad.conf</code>:</p><pre><code class="language-sway">input &quot;type:touchpad&quot; {
       dwt enabled
       tap enabled
       natural_scroll enabled
       middle_emulation enabled
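       # Further touchpad options, such as pointer_accel and scroll_factor, are documented in sway-input(5).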
}
</code></pre><h2 id="software-integration">Software Integration</h2><p>Most software works under Wayland or XWayland out of the box, but some of it still needs fine-tuning.</p><h3 id="chromium">Chromium</h3><p>Chromium defaults to auto-selecting its platform backend, which might lead to it falling back to X. Visit <code>chrome://flags</code> and select <code>Wayland</code> for <code>#ozone-platform-hint</code> to instruct Chromium to use Wayland.</p><h3 id="electron-in-flatpak">Electron in Flatpak</h3><p>Some Flatpaks ship with Wayland disabled by default because it might cause issues on non-HiDPI displays. When using a HiDPI monitor with scaling, however, this results in a blurred user interface. To fix this, expose the <code>wayland</code> socket to the application:</p><pre><code class="language-shell">flatpak override --socket=wayland &lt;APP_ID&gt;
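# To check which overrides are active for an app (com.slack.Slack is just an example ID):
flatpak override --show com.slack.Slack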
</code></pre><p>Alternatively, this can also be done graphically via <em><a href="https://github.com/tchx84/Flatseal">Flatseal</a></em>.</p><h2 id="more-to-read">More to Read</h2><ul><li><a href="https://fedoraproject.org/spins/sway/">Fedora Sway Spin</a></li><li><a href="https://github.com/swaywm/sway/wiki">swaywm/sway Wiki</a></li><li><a href="https://github.com/swaywm/sway/wiki#Multihead">Multihead - swaywm/sway Wiki</a></li><li><a href="https://github.com/swaywm/sway/wiki#keyboard-layout">Keyboard Layout - swaywm/sway Wiki</a></li><li><a href="https://github.com/swaywm/sway/wiki#clamshell-mode">Clamshell Mode - swaywm/sway Wiki</a></li><li><a href="https://sr.ht/~emersion/kanshi/">kanshi</a></li><li><a href="https://github.com/flathub/com.slack.Slack/issues/212">flathub/com.slack.Slack #212 Wayland issues on HiDPI</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Install FreeDOS - but Manually]]></title><description><![CDATA[FreeDOS is free DOS - not only as in free beer, but also as in free will. FreeDOS provides a Live CD and a USB installer, but what if one wants a Live USB? We can transfer the system and install packages manually.]]></description><link>https://blog.sakuragawa.moe/install-freedos-but-manually/</link><guid isPermaLink="false">6391bfa5f8e4e00001e47503</guid><category><![CDATA[Getting Started]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Quick Reference Handbook]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Fri, 09 Dec 2022 01:05:47 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2022/12/starting_1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.sakuragawa.moe/content/images/2022/12/starting_1.png" alt="Install FreeDOS - but Manually"><p>FreeDOS is free DOS - not only as in free beer, but also as in free will. FreeDOS provides a Live CD and a USB installer, but what if one wants a Live USB? 
Since FreeDOS is simply a DOS distribution, that is, a DOS kernel shipped with additional programs, we can transfer the system and install packages manually.</p><h2 id="prepare-the-usb-drive">Prepare the USB Drive</h2><p>This can be done in a traditional (and DOS) way:</p><pre><code class="language-bat">FDISK /AUTO 1
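REM FDISK changes the partition table; DOS may need a reboot before FORMAT sees the new partition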
REM Assume the USB drive is the C: drive
FORMAT C: /S
SYS C:</code></pre><p>But a quick way to prepare the USB drive is to use Rufus: select the drive and choose &quot;FreeDOS&quot; as the boot selection. Then you are good to go!</p><h2 id="install-the-base-system">Install the Base System</h2><p>At this point, the USB drive is already bootable (under legacy mode, of course, not UEFI). We already have a minimal system. Now, we are going to prepare a FreeDOS Live CD, which contains all the files we need. </p><h3 id="transfer-system-files">Transfer System Files</h3><p>Copy and rename <code>CDROM:\FREEDOS\BIN\KERNL386.SYS</code> to <code>\KERNEL.SYS</code>. This is the FreeDOS kernel.</p><p>The FreeDOS kernel will look for <code>FDCONFIG.SYS</code> to configure the system. Copy and rename <code>CDROM:\FDOS_X86\FREEDOS\CONFIGS\CONFIG.DEF</code> to <code>\FDCONFIG.SYS</code>. Open the file and replace the following variables:</p><ul><li><code>$FTARGET$</code> to <code>C:\FREEDOS</code></li><li><code>$FDRIVE$</code> to <code>C:</code></li><li><code>$FCCC$</code> to <code>001</code>, <code>$FCKC$</code> to <code>858</code> (or any other <a href="http://home.mnet-online.de/willybilly/fdhelp-internet/en/hhstndrd/base/cntrysys.htm">country code</a> you would like to set)</li></ul><p>And we also copy and rename <code>CDROM:\FDOS_X86\FREEDOS\CONFIGS\AUTOEXEC.DEF</code> to <code>\FDAUTO.BAT</code>. 
Then, again, replace:</p><ul><li><code>$FTARGET$</code> to <code>C:\FREEDOS</code></li><li><code>$FDRIVE$</code> to <code>C:</code></li><li><code>$FLANG$</code> to <code>EN</code></li><li><code>$TZ$</code> to <code>HKT</code> (or any time zone using abbreviation)</li><li><code>$LANGSET$</code> to <code>&quot;&quot;</code> (delete the line)</li></ul><h3 id="transfer-command-shell-and-package-manager">Transfer Command Shell and Package Manager</h3><p>Change directory to <code>CDROM:\FDOS_X86\FREEDOS\BIN</code>, then copy <code>command.com</code>, <code>fdimples.exe</code>, <code>fdinst.exe</code> and <code>fdnpkg.conf</code> to <code>\FREEDOS\BIN</code>.</p><p>Now the base system is ready to use. There is one last step: copy the <code>PACKAGES</code> folder from the CD to <code>\</code>. Make sure you include all desired packages in the folder so that they can be installed later.</p><h2 id="install-software-packages">Install Software Packages</h2><p>Reboot the computer. Boot into FreeDOS and choose <code>5 - Load FreeDOS without drivers (Emergency Mode)</code>. Start the package manager:</p><pre><code class="language-bat">set TEMP=\TEMP
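REM The package manager uses %TEMP% as scratch space, so it must point to a writable directory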
fdimples</code></pre><p>To install FreeDOS, mark the &quot;<em><a href="https://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/repositories/1.3/pkg-html/group-base.html">Base</a></em>&quot; group. Optionally, install the &quot;<em><a href="https://www.ibiblio.org/pub/micro/pc-stuff/freedos/files/repositories/1.3/pkg-html/v8power.html">V8Power</a></em>&quot; package from the &quot;<em>Utilities</em>&quot; group for some text UIs.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.sakuragawa.moe/content/images/2022/12/VirtualBox_FreeDOS_09_12_2022_07_11_53.png" class="kg-image" alt="Install FreeDOS - but Manually" loading="lazy" width="720" height="400" srcset="https://blog.sakuragawa.moe/content/images/size/w600/2022/12/VirtualBox_FreeDOS_09_12_2022_07_11_53.png 600w, https://blog.sakuragawa.moe/content/images/2022/12/VirtualBox_FreeDOS_09_12_2022_07_11_53.png 720w" sizes="(min-width: 720px) 720px"><figcaption>Select and install all in the <em>FreeDOS Base</em> group</figcaption></figure><h2 id="trouble-shooting">Troubleshooting</h2><h3 id="unable-to-use-fdimples">Unable to Use FDIMPLES</h3><p>If this error occurs during the installation process:</p><blockquote>Error: custom dir &apos;links&apos; is not a valid absolute path</blockquote><p>Make sure you set an absolute (not relative) path in <code>%DOSDIR%</code>.</p>]]></content:encoded></item><item><title><![CDATA[Proxmox VE for Workstation: Install Windows with GPU Passthrough]]></title><description><![CDATA[GPU passthrough on a Proxmox VE host enables the guest to provide a bare-metal-like experience - no RDP, no streaming. 
This limits each GPU to a single guest, but the approach is still powerful, easier to configure, and works with less advanced GPUs.]]></description><link>https://blog.sakuragawa.moe/proxmox-ve-for-workstation-install-windows-with-gpu-passthrough/</link><guid isPermaLink="false">6332fadb384ab20001f7cf67</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Quick Reference Handbook]]></category><category><![CDATA[Virtualization]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Tue, 18 Oct 2022 15:11:56 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2022/10/tadas-sar-T01GZhBSyMQ-unsplash.jpg" medium="image"/><content:encoded><![CDATA[<h2 id="overview">Overview</h2><img src="https://blog.sakuragawa.moe/content/images/2022/10/tadas-sar-T01GZhBSyMQ-unsplash.jpg" alt="Proxmox VE for Workstation: Install Windows with GPU Passthrough"><p>GPU passthrough on a Proxmox VE host enables the guest to provide a bare-metal-like experience - no RDP, no streaming. This limits each GPU to a single guest, but the approach is still powerful, easier to configure, and works with less advanced GPUs.</p><p>Before we start, make sure host device passthrough and GPU passthrough are configured. If they are not, follow the first article in the series:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://blog.sakuragawa.moe/proxmox-ve-for-workstation-configure-gpu-passthrough/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Proxmox VE for Workstation: Configure GPU Passthrough</div><div class="kg-bookmark-description">Proxmox VE is an open-source software server for virtualization management. It is chiefly designed for servers, but it is also great for workstations. 
Imagine you can have separated workspaces that run different operating systems, and they can run simultaneously!</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://blog.sakuragawa.moe/favicon.png" alt="Proxmox VE for Workstation: Install Windows with GPU Passthrough"><span class="kg-bookmark-author"># journalctl -xeu &quot;Sakuragawa&quot;</span><span class="kg-bookmark-publisher">Sakuragawa Asaba</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://blog.sakuragawa.moe/content/images/2022/09/pasted-image-1.png" alt="Proxmox VE for Workstation: Install Windows with GPU Passthrough"></div></a></figure><h2 id="install-operating-system-and-drivers">Install Operating System and Drivers</h2><p>Create a virtual machine for Windows:</p><pre><code>export VMID=100
qm create $VMID --name windows-ltsc \
                --ostype win10 \
                --sockets 1 --cores 8 \
                --memory 24576 \
                --net0 virtio,bridge=vmbr0,firewall=1 \
                --scsi0 local-lvm:200 \
                --ide0 &lt;Windows Installation ISO&gt;,media=cdrom \
                --ide2 &lt;Virtio Drivers ISO&gt;,media=cdrom \
                --boot order=ide0\;scsi0
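# Optionally, review the resulting VM configuration before booting:
qm config $VMID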
</code></pre><p>Then start the machine with <code>qm start $VMID</code> and finish the Windows installation.</p><p>At this point, only the <a href="https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md"><em>VirtIO</em> drivers</a> need to be installed after the Windows installation, since no GPU passthrough is configured yet.</p><h2 id="setup-gpu-passthrough-and-install-drivers">Setup GPU Passthrough and Install Drivers</h2><p>Add the GPU as a PCI device and disable the built-in display:</p><pre><code>export GPU_LOC=&quot;0000:01:00.0&quot;
qm set $VMID --hostpci0 ${GPU_LOC},pcie=1,x-vga=1
qm set $VMID --vga type=none
</code></pre><p>Connect an external monitor to the GPU and start the VM instance. Now Windows should pick up the GPU and display something on the external monitor, and we can proceed to GPU driver installation.</p><h2 id="issue-hunting">Issue Hunting</h2><h3 id="fix-mmap-failure-on-pve-72">Fix mmap Failure on PVE 7.2+</h3><p>On PVE 7.2 and later, mmap might fail. To fix the problem:</p><pre><code>#!/bin/bash
echo 1 &gt; /sys/bus/pci/devices/${GPU_LOC}/remove
echo 1 &gt; /sys/bus/pci/rescan
</code></pre><h3 id="hide-kvm-from-guest">Hide KVM from Guest</h3><p>PVE already hides itself from the NVIDIA driver out of the box. Yet we still need to set <code>hv_vendor_id</code> and <code>kvm=off</code> to run certain games:</p><pre><code>qm set $VMID -args &quot;-cpu &apos;host,-hypervisor,+kvm_pv_unhalt,+kvm_pv_eoi,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_reset,hv_vpindex,hv_runtime,hv_relaxed,kvm=off,hv_vendor_id=Proxmox&apos;&quot;
</code></pre><p>We can use <code>qm showcmd $VMID</code> to check the actual <code>qemu</code> command line generated by PVE.</p><h2 id="references">References</h2><ul><li><a href="https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/" rel="noopener">Ultimate Beginner&#x2019;s Guide to Proxmox GPU Passthrough</a></li><li><a href="https://www.reddit.com/r/Proxmox/comments/lcnn5w/proxmox_pcie_passthrough_in_2_minutes/">Proxmox PCI(e) Passthrough in 2 minutes</a></li><li><a href="https://forums.unraid.net/topic/103901-solved-aer-pcie-bus-errors/?do=findComment&amp;comment=959783" rel="noopener">[SOLVED] AER PCIe Bus Errors</a></li><li><a href="https://forum.proxmox.com/threads/gpu-passthrough-issues-after-upgrade-to-7-2.109051/#post-469855" rel="noopener">[SOLVED] GPU Passthrough Issues After Upgrade to 7.2</a></li><li><a href="https://forum.proxmox.com/threads/hide-vm-from-guest.34905/post-176978" rel="noopener">Hide vm from guest</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Proxmox VE for Workstation: Configure GPU Passthrough]]></title><description><![CDATA[Proxmox VE is an open-source software server for virtualization management. It is chiefly designed for servers, but it is also great for workstations. Imagine you can have separate workspaces that run different operating systems, and they can run simultaneously! 
]]></description><link>https://blog.sakuragawa.moe/proxmox-ve-for-workstation-configure-gpu-passthrough/</link><guid isPermaLink="false">631f324c39a30200015fe005</guid><category><![CDATA[Virtualization]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Getting Started]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Server]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Tue, 27 Sep 2022 13:27:43 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2022/09/pasted-image-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.sakuragawa.moe/content/images/2022/09/pasted-image-1.png" alt="Proxmox VE for Workstation: Configure GPU Passthrough"><p>Proxmox VE is an open-source software server for virtualization management. It is built on the KVM hypervisor, and it can host Linux, as well as Windows and macOS. It is chiefly designed for servers, but it is also great for workstations. Imagine you can have separate workspaces that run different operating systems, and they can run simultaneously! </p><p>Proxmox VE is based on Debian GNU/Linux, so this part also works&#x2122;&#xFE0F; for Debian GNU/Linux.</p><h2 id="prerequisite">Prerequisite</h2><ul><li>Enable VT-d in BIOS</li></ul><h2 id="enable-pcie-passthrough">Enable PCI(e) Passthrough</h2><p>Edit <code>/etc/default/grub</code> to enable IOMMU:</p><pre><code class="language-ini">....
GRUB_CMDLINE_LINUX_DEFAULT=&quot;quiet intel_iommu=on video=efifb:off video=vesafb:off pcie_aspm=off&quot;
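# On AMD platforms, use amd_iommu=on instead of intel_iommu=on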
....
</code></pre><p>Load VFIO kernel modules in <code>/etc/modules</code>:</p><pre><code>vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
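# On newer kernels (6.2+), vfio_virqfd is merged into the vfio module and can be omitted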
</code></pre><p>Finally, regenerate <code>initramfs</code> and reboot:</p><pre><code class="language-bash">update-grub
update-initramfs -u -k all</code></pre><h2 id="configure-gpu-passthrough">Configure GPU Passthrough</h2><p>Blacklist graphical card drivers to prevent them from being loaded:</p><pre><code class="language-bash">echo &quot;blacklist nouveau&quot; &gt;&gt; /etc/modprobe.d/blacklist.conf
echo &quot;blacklist nvidia&quot; &gt;&gt; /etc/modprobe.d/blacklist.conf
</code></pre><p>Check for the identifier of the GPU:</p><pre><code class="language-shell">root@pve-workstation:~# lspci -v | grep VGA
01:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1) (prog-if 00 [VGA controller])</code></pre><p>In this case, the identifier of the GPU is <code>01:00.0</code>. Then, check for the device ID:</p><pre><code class="language-bash">root@pve-workstation:~# lspci -n -s 01:00
01:00.0 0300: 10de:1b81 (rev a1)
01:00.1 0403: 10de:10f0 (rev a1)</code></pre><p>Here we have 2 device IDs (GPU and built-in audio). We need to configure VFIO for both of them:</p><pre><code class="language-bash">export GPU_PCI=&quot;01:00&quot;
export GPU_G_DID=&quot;$(lspci -n | grep &quot;${GPU_PCI}.0&quot; | cut -d &apos;:&apos; -f 3-4 | cut -c 2-10)&quot;
export GPU_A_DID=&quot;$(lspci -n | grep &quot;${GPU_PCI}.1&quot; | cut -d &apos;:&apos; -f 3-4 | cut -c 2-10)&quot;
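# Sanity check: both variables should hold vendor:device pairs such as 10de:1b81
echo &quot;GPU=${GPU_G_DID} audio=${GPU_A_DID}&quot;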
echo &quot;options vfio-pci ids=${GPU_G_DID},${GPU_A_DID} disable_vga=1&quot;&gt; /etc/modprobe.d/vfio.conf</code></pre><p>To prevent the unsupported MSRS error which leads to crashes, add some tweaks to modules:</p><pre><code class="language-bash">echo &quot;options vfio_iommu_type1 allow_unsafe_interrupts=1&quot; &gt; /etc/modprobe.d/iommu_unsafe_interrupts.conf
echo &quot;options kvm ignore_msrs=1 report_ignored_msrs=0&quot; &gt; /etc/modprobe.d/kvm.conf
</code></pre><p>Now, after a reboot, the graphics card should be ready to be passed through.</p><h2 id="references">References</h2><ul><li><a href="https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/">Ultimate Beginner&apos;s Guide to Proxmox GPU Passthrough</a></li><li><a href="https://forums.unraid.net/topic/103901-solved-aer-pcie-bus-errors/?do=findComment&amp;comment=959783">[SOLVED] AER PCIe Bus Errors</a></li><li><a href="https://forum.proxmox.com/threads/gpu-passthrough-issues-after-upgrade-to-7-2.109051/#post-469855">[SOLVED] GPU Passthrough Issues After Upgrade to 7.2</a></li></ul>]]></content:encoded></item><item><title><![CDATA[Labels in OpenShift 3 Web Console]]></title><description><![CDATA[Labels are used to match application names and resources in OpenShift 3 Web Console.]]></description><link>https://blog.sakuragawa.moe/labels-in-openshift-3-web-console/</link><guid isPermaLink="false">625e61a7dcc72e0001e9f080</guid><category><![CDATA[Container]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Quick Reference Handbook]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Tue, 19 Apr 2022 06:28:00 GMT</pubDate><content:encoded><![CDATA[<p>Labels are used to match application names and resources in OpenShift 3 Web Console.</p><h2 id="application-name">Application Name</h2><p>Setting the value of <code>metadata.labels.app</code> in a <code>deploymentconfig</code> allows OpenShift to show the application name on the overview page. Deployment configs with the same value will be grouped together.</p><h2 id="networking-information">Networking Information</h2><p>OpenShift can display related services and routes of a <code>deploymentconfig</code> on the overview page. 
This requires a match between <code>metadata.name</code> and <code>spec.template.metadata.labels.deploymentconfig</code> in a <code>deploymentconfig</code>.</p>]]></content:encoded></item><item><title><![CDATA[Why does Nextcloud not Work with MariaDB>10.6?]]></title><description><![CDATA[<figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://help.nextcloud.com/t/update-to-22-failed-with-database-error-updated/120682/9"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Update to 22 failed with database error - Updated</div><div class="kg-bookmark-description">That&#x2019;s a pretty good idea anyways. I always test new versions of NC and dependencies before I roll them out in &#x201C;production&#x201D;. And that although I only use Nextcloud for personal things. But since I also migrated</div></div></a></figure>]]></description><link>https://blog.sakuragawa.moe/why-does-not-nextcloud-work-with-mariadb-10-6/</link><guid isPermaLink="false">61b94392c855840001e4d472</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Debugging]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Sun, 10 Apr 2022 11:30:43 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2022/04/47895a00-3aef-11ea-97c9-db1565d05c79.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://help.nextcloud.com/t/update-to-22-failed-with-database-error-updated/120682/9"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Update to 22 failed with database error - Updated</div><div class="kg-bookmark-description">That&#x2019;s a pretty good idea anyways. I always test new versions of NC and dependencies before I roll them out in &#x201C;production&#x201D;. And that although I only use Nextcloud for personal things. 
But since I also migrated my girlfriend from Goggle to my NC, I can&#x2019;t afford downtime anymore &#x1F603;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://help.nextcloud.com/uploads/default/optimized/2X/a/ac4bed144350975da430e98e1b9561230119318f_2_180x180.png" alt="Why does Nextcloud not Work with MariaDB&gt;10.6?"><span class="kg-bookmark-author">Nextcloud community</span><span class="kg-bookmark-publisher">privatewolke</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://help.nextcloud.com/uploads/default/original/2X/a/ac4bed144350975da430e98e1b9561230119318f.png" alt="Why does Nextcloud not Work with MariaDB&gt;10.6?"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/nextcloud/docker/issues/1492#issuecomment-881216028"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Database Incompatibility with MariaDB 10.6.0 &#xB7; Issue #1492 &#xB7; nextcloud/docker</div><div class="kg-bookmark-description">Hey! So I updated mariadb, which I use with nextcloud to version alpha 10.6.0. 
Which this verison this came https://mariadb.com/kb/en/innodb-system-variables/#innodb_read_only_compressed and gave m...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Why does Nextcloud not Work with MariaDB&gt;10.6?"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">nextcloud</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/40802b2fad58b03c60cff301d170311a419f2cec63177882349bed6be909e5af/nextcloud/docker/issues/1492" alt="Why does Nextcloud not Work with MariaDB&gt;10.6?"></div></a></figure><img src="https://blog.sakuragawa.moe/content/images/2022/04/47895a00-3aef-11ea-97c9-db1565d05c79.png" alt="Why does Nextcloud not Work with MariaDB&gt;10.6?"><p>Temporarily disable the read-only compressed row format in MariaDB:</p><pre><code class="language-sql">SET GLOBAL innodb_read_only_compressed=OFF;</code></pre><p>Then patch the Nextcloud files manually and start the upgrade from inside the container:</p><pre><code class="language-console">$ kubectl -n nextcloud exec -it statefulset/nextcloud -- /bin/bash
# rsync -rlDog --chown www-data:root --delete --exclude-from=/upgrade.exclude /usr/src/nextcloud/ /var/www/html/
# rsync -rlDog --chown www-data:root --include &apos;/version.php&apos; --exclude &apos;/*&apos; /usr/src/nextcloud/ /var/www/html/
# su - www-data -s /bin/bash -c &apos;php -d memory_limit=-1 /var/www/html/occ -vvv upgrade&apos;</code></pre><p>To eventually fix this problem, we need to update the row format from <code>COMPRESSED</code> to <code>DYNAMIC</code> on all tables. In MariaDB:</p><pre><code class="language-sql">SELECT CONCAT(&apos;ALTER TABLE `&apos;, table_name, &apos;` ROW_FORMAT=DYNAMIC;&apos;) AS aQuery FROM information_schema.tables WHERE table_schema = &apos;nextcloud&apos;</code></pre><p>Then execute the output as commands.</p><hr><p>Nextcloud might add support for MariaDB 10.6 and fix this issue in a later version of 23.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/nextcloud/server/pull/30129"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Add MariaDB 10.6 support (drop row_format=compressed) by acsfer &#xB7; Pull Request #30129 &#xB7; nextcloud/server</div><div class="kg-bookmark-description">Keeping MariaDB 10.4 too as both versions have some BC breaks, so tests will run on both (for now).</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Why does Nextcloud not Work with MariaDB&gt;10.6?"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">nextcloud</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/a905b81f147a860956965d1aeaa203ebb3267d5d43f8e12b00f0cf40616b34fd/nextcloud/server/pull/30129" alt="Why does Nextcloud not Work with MariaDB&gt;10.6?"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Setting up Containerized FreeIPA & KeyCloak Single Sign-On]]></title><description><![CDATA[Explore the Red Hat solution on LDAP + OIDC / SAML.]]></description><link>https://blog.sakuragawa.moe/setting-up-containerized-freeipa-keycloak-single-sign-on/</link><guid 
isPermaLink="false">60ea2fda066af000014d9e92</guid><category><![CDATA[Container]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Getting Started]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Thu, 09 Dec 2021 08:55:51 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2021/07/keycloak-sssd-freeipa-integration-overview.png" medium="image"/><content:encoded><![CDATA[<h2 id="initialize-freeipa-container-data">Initialize FreeIPA Container Data</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://hub.docker.com/r/freeipa/freeipa-server/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Docker Hub</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"></div></div></a></figure><img src="https://blog.sakuragawa.moe/content/images/2021/07/keycloak-sssd-freeipa-integration-overview.png" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"><p>The <a href="https://hub.docker.com/r/freeipa/freeipa-server/">official FreeIPA container image</a> requires a one-time installation process before running. For installation, a file containing <code>ipa-server-install</code> options should be provided, and Docker command should be <code>ipa-server-install -U</code>.</p><p>To complete this one-time process, create a <code>docker-compose</code> YAML file:</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">version: &quot;3&quot;
services:
  &quot;central&quot;:
    container_name: &quot;ipa-central&quot;
    image: &quot;docker.io/freeipa/freeipa-server:${FREEIPA_VERSION:-centos-8-stream}&quot;
    command: &quot;ipa-server-install -U&quot;
    volumes:
      - &quot;/sys/fs/cgroup:/sys/fs/cgroup:ro&quot;
      - &quot;./ipa-central/data:/data&quot;
      - &quot;./ipa-central/freeipa.options:/data/ipa-server-install-options&quot;
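      # The freeipa.options file mounted above is the answer file for the
      # unattended installer. A hypothetical minimal example (all values
      # below are placeholders, not from this deployment):
      #   --realm=SAKURAGAWA.CLOUD
      #   --ds-password=&lt;directory manager password&gt;
      #   --admin-password=&lt;admin password&gt;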
    read_only: true
    hostname: &quot;${FREEIPA_HOSTNAME}&quot;
    sysctls:
net.ipv6.conf.all.disable_ipv6: 0</code></pre><figcaption>Content of <code>install.yml</code></figcaption></figure><p>Then start the process with <code>docker-compose -f install.yml up</code>. After the installation succeeds, start the FreeIPA server container with <code>docker-compose -f run.yml up</code>:</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">version: &quot;3&quot;
services:
  &quot;central-ipa&quot;:
    container_name: &quot;ipa-central&quot;
    image: &quot;docker.io/freeipa/freeipa-server:${FREEIPA_VERSION:-centos-8-stream}&quot;
    volumes:
      - &quot;/sys/fs/cgroup:/sys/fs/cgroup:ro&quot;
      - &quot;./ipa-central/data:/data&quot;
    read_only: true
    ports:
      - 127.0.0.1:9443:443
      - 389:389
      - 636:636
      - 88:88
      - 88:88/udp
      - 464:464
      - 464:464/udp
      - 123:123/udp
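      # 443 = web UI (published on localhost only, to sit behind a reverse
      # proxy), 389/636 = LDAP/LDAPS, 88 and 464 (tcp+udp) = Kerberos
      # authentication and password change, 123/udp = NTP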
    hostname: &quot;${FREEIPA_HOSTNAME}&quot;
    sysctls:
net.ipv6.conf.all.disable_ipv6: 0</code></pre><figcaption>Content of <code>run.yml</code></figcaption></figure><h2 id="post-installation-setup">Post-installation Setup</h2><h3 id="ldap-service-account">LDAP Service Account</h3><p>Some LDAP clients need a pre-configured bind account. Do not use the Directory Manager account to authenticate remote services to the IPA LDAP server; a dedicated service account can be created like this:</p><pre><code class="language-ldif">[root@ipa /]# ldapmodify -x -D &apos;cn=Directory Manager&apos; -W
Enter LDAP Password: &lt;FreeIPA Directory Password&gt;
dn: uid=authenticator,cn=serviceaccounts,cn=etc,dc=sakuragawa,dc=cloud
changetype: add
objectclass: account
objectclass: simplesecurityobject
uid: authenticator
userPassword: &lt;password&gt;
passwordExpirationTime: 20380119031407Z
nsIdleTimeout: 0

adding new entry &quot;uid=authenticator,cn=serviceaccounts,cn=etc,dc=sakuragawa,dc=cloud&quot;
&lt;Ctrl-D&gt;
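# Optional sanity check (assumes the password set above): the new
# account should be able to bind and identify itself.
[root@ipa /]# ldapwhoami -x -D &apos;uid=authenticator,cn=serviceaccounts,cn=etc,dc=sakuragawa,dc=cloud&apos; -W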
</code></pre><blockquote>The reason to use an account like this rather than creating a normal user account in IPA and using that is that the system account exists only for binding to LDAP. It is not a real POSIX user, can&#x2019;t log into any systems and doesn&#x2019;t own any files.</blockquote><blockquote>This user also has no special rights and is unable to write any data in the IPA LDAP server, only read.</blockquote><p>References:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.freeipa.org/page/HowTo/LDAP#System_Accounts"><div class="kg-bookmark-content"><div class="kg-bookmark-title">HowTo/LDAP - FreeIPA</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://www.freeipa.org/images/freeipa/favicon.ico" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"><span class="kg-bookmark-author">FreeIPA</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.freeipa.org/images/freeipa/freeipa-logo-small.png" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></a></figure><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://github.com/noahbliss/freeipa-sam"><div class="kg-bookmark-content"><div class="kg-bookmark-title">noahbliss/freeipa-sam</div><div class="kg-bookmark-description">System Account Manager for FreeIPA. 
Contribute to noahbliss/freeipa-sam development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">noahbliss</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/3c6c3a47f3aa3a232dd868fa3f667bf336431435ae51887de7cf9c2894626478/noahbliss/freeipa-sam" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></a><figcaption><a href="https://github.com/noahbliss/freeipa-sam" rel="noopener">freeipa-sam</a>: a shell helper to manage system accounts (by <a href="https://lists.fedorahosted.org/archives/list/freeipa-devel@lists.fedorahosted.org/message/AI4WSAMPKF4OSV6DFMKKTDEK4P7Y33SF/">Noah Bliss</a>)</figcaption></figure><h3 id="allow-anonymous-to-read-mail-attribute">Allow Anonymous to Read mail Attribute</h3><p>By default only bound (or authenticated) users are allowed to read <code>mail</code> attribute. For internal usage, querying the attribute anonymously is safe.</p><pre><code class="language-bash">$ ipa permission-add &apos;Mail Readable by Anonymous&apos; --type=user --attrs=mail --bindtype=anonymous --permissions=read
</code></pre><h2 id="upgrade-containerized-freeipa">Upgrade Containerized FreeIPA</h2><p>Theoretically, after the image is changed, the FreeIPA container will start the upgrade process automatically. But there are some common issues during the upgrade.</p><h3 id="dirsrv-failed-to-start">DIRSRV Failed to Start</h3><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://bugzilla.redhat.com/show_bug.cgi?id=2023056#c3"><div class="kg-bookmark-content"><div class="kg-bookmark-title">2023056 &#x2013; After upgrading ipa package, ipa server fails to start with dirsrv init error (/usr/lib64/dirsrv/plugins/libpwdstorage-plugin.so: undefined symbol: gost_yescrypt_pwd_storage_scheme_init)</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://bugzilla.redhat.com/extensions/RedHat/web/css/favicons/production.ico?v=0" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></div></a></figure><blockquote>So if an instance was created with early 8.5 builds, a plugin entry (dn: cn=GOST_YESCRYPT,cn=Password Storage Schemes,cn=plugins,cn=config) was created. 
Then the upgrade removed the init callback and startup fails.</blockquote><blockquote>A quick relief is by editing dse.ldif and removing cn=GOST_YESCRYPT,cn=Password Storage Schemes,cn=plugins,cn=config.</blockquote><h3 id="sssd-failed-to-start">SSSD Failed to Start</h3><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://bugzilla.redhat.com/show_bug.cgi?id=988207"><div class="kg-bookmark-content"><div class="kg-bookmark-title">988207 &#x2013; sssd does not detail which line in configuration is invalid</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://bugzilla.redhat.com/extensions/RedHat/web/css/favicons/production.ico?v=0" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></div></a></figure><p>SSSD will fail to start if the permissions and owner of the configuration file at <code>/etc/sssd/sssd.conf</code> aren&apos;t set properly:</p><pre><code># chown root /etc/sssd/sssd.conf
# chgrp root /etc/sssd/sssd.conf
# chmod 600  /etc/sssd/sssd.conf</code></pre><hr><h2 id="deploy-keycloak-container">Deploy KeyCloak Container</h2><pre><code class="language-yaml">version: &quot;3&quot;
services:
  &quot;central-sso&quot;:
    container_name: &quot;keycloak-central&quot;
    image: &quot;quay.io/keycloak/keycloak:${KEYCLOAK_VERSION:-14.0.0}&quot;
    ports:
      - 127.0.0.1:9080:8080
      - 127.0.0.1:9990:9990
    environment:
      DB_VENDOR: &quot;mariadb&quot;
      DB_ADDR: &quot;db-master.example.com&quot;
      DB_PORT: &quot;3306&quot;
      DB_DATABASE: &quot;keycloak_skg&quot;
      DB_USER: &quot;keycloak&quot;
      DB_PASSWORD: &quot;${DB_PASSWORD}&quot;
      PROXY_ADDRESS_FORWARDING: &quot;true&quot;
      KEYCLOAK_USER: &quot;admin&quot;
      KEYCLOAK_PASSWORD: &quot;${KEYCLOAK_PASSWORD}&quot;
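      # Note: these DB_* / KEYCLOAK_* variables are understood by the legacy
      # WildFly-based images (up to 16.x, like the 14.0.0 pinned above); the
      # Quarkus-based 17+ images renamed them (KC_DB*, KEYCLOAK_ADMIN*).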
</code></pre><p>Note that it is important to set <code>PROXY_ADDRESS_FORWARDING=true</code> in environment variables especially when KeyCloak is served behind a reverse proxy.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://keycloak.discourse.group/t/getting-invalid-parameter-redirect-uri-when-installed-with-keycloak-operator/830"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Getting Invalid parameter: redirect_uri when installed with keycloak-operator</div><div class="kg-bookmark-description">Hi I have been trying to install Keycloak on a K8s environment but with limited success so far. I just followed the instructions of the keycloak-operator: make cluster/prepare kubectl apply -f deploy/operator.yaml kubectl apply -f deploy/examples/keycloak/keycloak.yaml This worked fine and I got&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://aws1.discourse-cdn.com/free1/uploads/keycloak/optimized/1X/eb342909d95cf32cbb7517610022c6a0046a9ffb_2_180x180.png" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"><span class="kg-bookmark-author">Keycloak</span><span class="kg-bookmark-publisher">fabricepipart</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://aws1.discourse-cdn.com/free1/uploads/keycloak/original/1X/eb342909d95cf32cbb7517610022c6a0046a9ffb.png" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://gitmemory.com/issue/codecentric/helm-charts/70/509492747"><div class="kg-bookmark-content"><div class="kg-bookmark-title">nginx ingress: Invalid parameter: redirect_uri on admin access - helm-charts</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><span class="kg-bookmark-author">helm-charts</span></div></div><div class="kg-bookmark-thumbnail"><img 
src="https://raw.githubusercontent.com/github/explore/fd96fceccf8c42c99cbe29cf0f8dcc4736fcb85a/topics/nodejs/nodejs.png" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></a></figure><hr><h2 id="securing-services-behind-oauth2proxy">Securing Services Behind OAuth2Proxy</h2><p>Generate a cookie secret first:</p><pre><code class="language-bash">$ export OAUTH2_PROXY_COOKIE_SECRET=`python -c &apos;import os,base64; print(base64.urlsafe_b64encode(os.urandom(16)).decode())&apos;`</code></pre><p>Then we will create a <code>sakuragawa-sso.env</code> file to store all authentication-backend-related environment variables. Since there are still issues with the Keycloak provider, we will use the OpenID Connect (OIDC) provider here.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/oauth2-proxy/oauth2-proxy/issues/1093"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Issues with Keycloak integration &#xB7; Issue #1093 &#xB7; oauth2-proxy/oauth2-proxy</div><div class="kg-bookmark-description">Integrating oauth2-proxy with Keycloak is not worknig. 
When I inspected the logs, it says Invalid parameter value for: scope Expected Behavior Must be able to login to the respective service using ...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">oauth2-proxy</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/9d0ed0a5745c51ce581da4063095d14b108b70ed10a2e6a260afbf9396e08063/oauth2-proxy/oauth2-proxy/issues/1093" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/oauth2-proxy/oauth2-proxy/issues/956"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Keycloak Provider is just Barebones OAuth - Doesn&#x2019;t Expose OIDC Potential &#xB7; Issue #956 &#xB7; oauth2-proxy/oauth2-proxy</div><div class="kg-bookmark-description">We have a keycloak provider that most Keycloak users likely gravitate to by default. It is a very barebones implementation mostly riding our default OAuth provider methods, only overriding the User...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">oauth2-proxy</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/ef27cba3ea2af5d6a3075a9870eead9a0b905cdeca3352f05b4e09a2ad7d2da2/oauth2-proxy/oauth2-proxy/issues/956" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></a></figure><figure class="kg-card kg-code-card"><pre><code class="language-ini">OAUTH2_PROXY_PROVIDER=oidc
OAUTH2_PROXY_PROVIDER_DISPLAY_NAME=Sakuragawa SSO
OAUTH2_PROXY_CLIENT_ID=&lt;client ID&gt;
OAUTH2_PROXY_CLIENT_SECRET=&lt;client secret&gt;
OAUTH2_PROXY_OIDC_ISSUER_URL=https://sso.central.sakuragawa.cloud/auth/realms/SKG
OAUTH2_PROXY_INSECURE_OIDC_ALLOW_UNVERIFIED_EMAIL=true
OAUTH2_PROXY_ALLOWED_GROUPS=&lt;keycloak group 1&gt;, &lt;keycloak group 2&gt;</code></pre><figcaption>Content of <code>sakuragawa-sso.env</code></figcaption></figure><p>Then create an orchestration file which includes global settings:</p><figure class="kg-card kg-code-card"><pre><code class="language-yaml">version: &quot;3&quot;
services:
  auth:
    image: &quot;quay.io/oauth2-proxy/oauth2-proxy:latest&quot;
    ports:
      - &quot;127.0.0.1:8180:4180&quot;
    environment:
      OAUTH2_PROXY_UPSTREAMS: &quot;http://&lt;upstream&gt;:5000/&quot;
      OAUTH2_PROXY_WHITELIST_DOMAINS: &quot;.central.sakuragawa.cloud:*&quot;
      OAUTH2_PROXY_HTTP_ADDRESS: &quot;:4180&quot;
      OAUTH2_PROXY_COOKIE_SECRET:
      OAUTH2_PROXY_REVERSE_PROXY: &quot;true&quot;
      OAUTH2_PROXY_EMAIL_DOMAINS: &quot;example.com,example.org&quot;
      OAUTH2_PROXY_PASS_USER_HEADERS: &quot;true&quot;
    env_file:
      - sakuragawa-sso.env
    restart: &quot;unless-stopped&quot;
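    # OAUTH2_PROXY_COOKIE_SECRET is intentionally given no value above:
    # Compose passes it through from the host shell, where it was exported
    # by the python one-liner earlier.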
</code></pre><figcaption>Content of <code>docker-compose.yml</code></figcaption></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/oauth2-proxy/oauth2-proxy/issues/117"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Ability to ignore unverified email &#xB7; Issue #117 &#xB7; oauth2-proxy/oauth2-proxy</div><div class="kg-bookmark-description">Keycloak doesn&amp;#39;t verify email for users imported from LDAP https://stackoverflow.com/questions/38307029/ldap-federated-users-automatic-set-email-verified-to-true So, I have to change default ma...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">oauth2-proxy</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/355c2af9ee2a6f9dd9d9c7514c9e6f84b00f04279ba00d6fc5b3a05501bd9ca8/oauth2-proxy/oauth2-proxy/issues/117" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></a></figure><hr><h1 id="references">References</h1><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://access.redhat.com/solutions/3010401"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Federation with Keycloak and FreeIPA - Red Hat Customer Portal</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://access.redhat.com/webassets/avalon/g/favicon.ico" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"><span class="kg-bookmark-author">Red Hat Customer Portal</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://access.redhat.com/sites/default/files/images/certificates.png" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single 
Sign-On"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://www.talkingquickly.co.uk/gitea-sso-with-keycloak-openldap-openid-connect"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Gitea SSO with Keycloak, OpenLDAP and OpenID Connect</div><div class="kg-bookmark-description">Blog by Ben Dixon, Ruby on Rails Developer, about rails, kubernetes, docker, climbing and startups</div><div class="kg-bookmark-metadata"><span class="kg-bookmark-publisher">Catapult</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://www.talkingquickly.co.uk/assets/images/profile4.jpg" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://janikvonrotz.ch/2020/04/21/configure-saml-authentication-for-nextcloud-with-keycloack/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Configure SAML Authentication for Nextcloud with Keycloack</div><div class="kg-bookmark-description">Introduction The complex problems of identity and access management (IAM) have challenged big companies and in result we got powerful protocols, technologies and concepts such as SAML, oAuth, Keycloack, tokens and much more. The goal of IAM is simple. 
Centralize all identities, policies and get ri&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://janikvonrotz.ch/images/logo.png" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"><span class="kg-bookmark-author">Configure SAML Authentication for Nextcloud with Keycloack</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://janikvonrotz.ch/images/building%20connection.jpg" alt="Setting up Containerized FreeIPA &amp; KeyCloak Single Sign-On"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Etcd does not Start after Disk is Full]]></title><description><![CDATA[Restoring etcd is not an easy task - even today.]]></description><link>https://blog.sakuragawa.moe/etcd-does-not-start-after-disk-is-full/</link><guid isPermaLink="false">619ab0baac0eee0001826673</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Debugging]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Open Source]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Mon, 22 Nov 2021 13:43:21 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2021/11/etcd-horizontal-color.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/etcd-io/etcd/issues/3591"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Unable to retsart after disk-full &#xB7; Issue #3591 &#xB7; etcd-io/etcd</div><div class="kg-bookmark-description">Hi, I&amp;#39;m unable to restart a standalone etcd2 server after a disk full event on CoreOS stable 723.3.0. 
I upgraded CoreOS to the latest stable (766.3.0) as it has etcd 2.1 which should recovers f...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Etcd does not Start after Disk is Full"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">etcd-io</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/48fe2524f49358a979f9e00408b5ae76903f90ec6b4c5416047464a99ce884ac/etcd-io/etcd/issues/3591" alt="Etcd does not Start after Disk is Full"></div></a></figure><h2 id="round-1-etcdmain-read-wal-inputoutput-error">Round 1: etcdmain: read .wal: input/output error</h2><pre><code class="language-log">2021-11-21 20:43:04.262587 I | embed: initial cluster =
2021-11-21 20:43:04.280202 W | wal: ignored file 000000000000035c-00000000045d83a6.wal.broken in wal
2021-11-21 20:43:04.280256 W | wal: ignored file 0000000000000375-0000000004764aca.wal.broken in wal
2021-11-21 20:43:08.251726 C | etcdmain: read /var/lib/rancher/etcd/member/wal/0000000000000375-0000000004764aca.wal: input/output error</code></pre><img src="https://blog.sakuragawa.moe/content/images/2021/11/etcd-horizontal-color.png" alt="Etcd does not Start after Disk is Full"><p>We can see that the file etcd is looking for was renamed from <code>.wal</code> to <code>.wal.broken</code>. Rename it back to <code>.wal</code> and start etcd again.</p><h2 id="round-2-etcdmain-walpb-crc-mismatch">Round 2: etcdmain: walpb: crc mismatch</h2><pre><code class="language-log">2021-11-21 20:50:19.312596 I | embed: initial cluster =
2021-11-21 20:50:19.323572 W | wal: ignored file 000000000000035c-00000000045d83a6.wal.broken in wal
2021-11-21 20:50:19.507366 C | etcdmain: walpb: crc mismatch</code></pre><p>To solve this problem, one option is to save a snapshot from a healthy member and restore it.</p><p>Before starting, prepare some environment variables and a convenience alias:</p><pre><code class="language-shell"># export NODE_NAME=container-1-paas-central-sakuragawa-cloud
# export KUBE_CA=/etc/kubernetes/ssl/kube-ca.pem
# export ETCD_PEM=/etc/kubernetes/ssl/kube-etcd-$NODE_NAME.pem
# export ETCD_KEY=/etc/kubernetes/ssl/kube-etcd-$NODE_NAME-key.pem
# export ETCD_SERVER=https://container-2:2379
# alias etcdctl=&quot;etcdctl --cert=$ETCD_PEM --key=$ETCD_KEY --cacert=$KUBE_CA --endpoints=$ETCD_SERVER&quot;</code></pre><p>Save a snapshot from other working nodes:</p><pre><code class="language-shell"># etcdctl snapshot save snapshot.db
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1637576476.7098856,&quot;caller&quot;:&quot;snapshot/v3_snapshot.go:68&quot;,&quot;msg&quot;:&quot;created temporary db file&quot;,&quot;path&quot;:&quot;snapshot.db.part&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1637576476.722268,&quot;logger&quot;:&quot;client&quot;,&quot;caller&quot;:&quot;v3/maintenance.go:211&quot;,&quot;msg&quot;:&quot;opened snapshot stream; downloading&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1637576476.7223496,&quot;caller&quot;:&quot;snapshot/v3_snapshot.go:76&quot;,&quot;msg&quot;:&quot;fetching snapshot&quot;,&quot;endpoint&quot;:&quot;https://container-2:2379&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1637576478.531543,&quot;logger&quot;:&quot;client&quot;,&quot;caller&quot;:&quot;v3/maintenance.go:219&quot;,&quot;msg&quot;:&quot;completed snapshot read; closing&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1637576479.3304148,&quot;caller&quot;:&quot;snapshot/v3_snapshot.go:91&quot;,&quot;msg&quot;:&quot;fetched snapshot&quot;,&quot;endpoint&quot;:&quot;https://container-2:2379&quot;,&quot;size&quot;:&quot;25 MB&quot;,&quot;took&quot;:&quot;2 seconds ago&quot;}
{&quot;level&quot;:&quot;info&quot;,&quot;ts&quot;:1637576479.3305554,&quot;caller&quot;:&quot;snapshot/v3_snapshot.go:100&quot;,&quot;msg&quot;:&quot;saved&quot;,&quot;path&quot;:&quot;snapshot.db&quot;}
Snapshot saved at snapshot.db</code></pre><p>Then stop <code>etcd</code> server and restore the snapshot:</p><pre><code class="language-shell"># docker stop etcd
# mv /var/lib/etcd/ /var/lib/etcd.old/
# etcdctl snapshot restore snapshot.db --data-dir=/var/lib/etcd/</code></pre><h2 id="round-3-restore-from-snapshot-again">Round 3: Restore from Snapshot (Again)</h2><p>After <code>etcd</code> starts, it does not rejoin the cluster as expected:</p><pre><code class="language-log">2021-11-22 11:46:37.003240 E | rafthttp: request cluster ID mismatch (got e8061948cdccadb7 want cdf818194e3a8c32)</code></pre><p>First stop the node agent (again) and the <code>etcd</code> container, then delete all the local <code>etcd</code> data:</p><pre><code class="language-shell"># docker stop etcd
# rm -f /var/lib/etcd/member/wal/*
# rm -f /var/lib/etcd/member/snap/*</code></pre><p>To restore the node, we need some parameters from the original node. We can run <code>docker inspect etcd</code> and collect them from <code>Args</code> and <code>Envs</code>:</p><ul><li><code>--name</code></li><li><code>--initial-cluster</code></li><li><code>--initial-cluster-token</code></li><li><code>--initial-advertise-peer-urls</code></li></ul><p>Then use these parameters when restoring from the snapshot:</p><pre><code class="language-shell"># etcdctl snapshot restore snapshot.db \
	--cert=$ETCD_PEM \
	--key=$ETCD_KEY \
    --cacert=$KUBE_CA \
    --name=etcd-container-1 \
    --initial-cluster=etcd-container-1=&quot;https://container-1:2380,etcd-container-2=https://container-2:2380&quot; \
    --initial-cluster-token=&quot;etcd-cluster-1&quot; \
    --initial-advertise-peer-urls=&quot;https://container-1:2380&quot; \
    --data-dir /var/lib/etcd
Deprecated: Use `etcdutl snapshot restore` instead.

2021-11-22T21:07:11+08:00       info    netutil/netutil.go:112  resolved URL Host       {&quot;url&quot;: &quot;https://container-1:2380&quot;, &quot;host&quot;: &quot;container-1:2380&quot;, &quot;resolved-addr&quot;: &quot;192.168.1.21:2380&quot;}
2021-11-22T21:07:11+08:00       info    netutil/netutil.go:112  resolved URL Host       {&quot;url&quot;: &quot;https://container-1:2380&quot;, &quot;host&quot;: &quot;container-1:2380&quot;, &quot;resolved-addr&quot;: &quot;192.168.1.21:2380&quot;}
2021-11-22T21:07:11+08:00       info    snapshot/v3_snapshot.go:251     restoring snapshot      {&quot;path&quot;: &quot;snapshot-20211122.db&quot;, &quot;wal-dir&quot;: &quot;/var/lib/etcd/member/wal&quot;, &quot;data-dir&quot;: &quot;/var/lib/etcd&quot;, &quot;snap-dir&quot;: &quot;/var/lib/etcd/member/snap&quot;, &quot;stack&quot;: &quot;go.etcd.io/etcd/etcdutl/v3/snapshot.(*v3Manager).Restore\n\t/tmp/etcd-release-3.5.1/etcd/release/etcd/etcdutl/snapshot/v3_snapshot.go:257\ngo.etcd.io/etcd/etcdutl/v3/etcdutl.SnapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.1/etcd/release/etcd/etcdutl/etcdutl/snapshot_command.go:147\ngo.etcd.io/etcd/etcdctl/v3/ctlv3/command.snapshotRestoreCommandFunc\n\t/tmp/etcd-release-3.5.1/etcd/release/etcd/etcdctl/ctlv3/command/snapshot_command.go:128\ngithub.com/spf13/cobra.(*Command).execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/cobra@v1.1.3/command.go:856\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/cobra@v1.1.3/command.go:960\ngithub.com/spf13/cobra.(*Command).Execute\n\t/home/remote/sbatsche/.gvm/pkgsets/go1.16.3/global/pkg/mod/github.com/spf13/cobra@v1.1.3/command.go:897\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.Start\n\t/tmp/etcd-release-3.5.1/etcd/release/etcd/etcdctl/ctlv3/ctl.go:107\ngo.etcd.io/etcd/etcdctl/v3/ctlv3.MustStart\n\t/tmp/etcd-release-3.5.1/etcd/release/etcd/etcdctl/ctlv3/ctl.go:111\nmain.main\n\t/tmp/etcd-release-3.5.1/etcd/release/etcd/etcdctl/main.go:59\nruntime.main\n\t/home/remote/sbatsche/.gvm/gos/go1.16.3/src/runtime/proc.go:225&quot;}
2021-11-22T21:07:12+08:00       info    membership/store.go:141 Trimming membership information from the backend...
2021-11-22T21:07:12+08:00       info    membership/cluster.go:421       added member    {&quot;cluster-id&quot;: &quot;e8061948cdccadb7&quot;, &quot;local-member-id&quot;: &quot;0&quot;, &quot;added-peer-id&quot;: &quot;94447210df55f5df&quot;, &quot;added-peer-peer-urls&quot;: [&quot;https://container-2:2380&quot;]}
2021-11-22T21:07:12+08:00       info    membership/cluster.go:421       added member    {&quot;cluster-id&quot;: &quot;e8061948cdccadb7&quot;, &quot;local-member-id&quot;: &quot;0&quot;, &quot;added-peer-id&quot;: &quot;e089ac38e0c2282f&quot;, &quot;added-peer-peer-urls&quot;: [&quot;https://container-1:2380&quot;]}
2021-11-22T21:07:12+08:00       info    snapshot/v3_snapshot.go:272     restored snapshot       {&quot;path&quot;: &quot;snapshot-20211122.db&quot;, &quot;wal-dir&quot;: &quot;/var/lib/etcd/member/wal&quot;, &quot;data-dir&quot;: &quot;/var/lib/etcd&quot;, &quot;snap-dir&quot;: &quot;/var/lib/etcd/member/snap&quot;}</code></pre><p>Finally, start <code>etcd</code> and the node will be back in the cluster.</p><pre><code class="language-shell">$ docker start etcd
$ docker exec -e ETCDCTL_ENDPOINTS=$(docker exec etcd /bin/sh -c &quot;etcdctl member list | cut -d, -f5 | sed -e &apos;s/ //g&apos; | paste -sd &apos;,&apos;&quot;) etcd etcdctl endpoint status --write-out table
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT         |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://container-2:2379 | 94447210df55f5df |  3.4.14 |   25 MB |      true |      false |     82444 |   75080617 |           75080617 |        |
| https://container-1:2379 | e089ac38e0c2282f |  3.4.14 |   25 MB |     false |      false |     82444 |   75080617 |           75080617 |        |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+</code></pre><h2 id="references">References</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://rancher.com/docs/rancher/v2.5/en/troubleshooting/kubernetes-components/etcd/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Troubleshooting etcd Nodes</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://rancher.com/docs/img/favicon.png" alt="Etcd does not Start after Disk is Full"><span class="kg-bookmark-author">Rancher Labs</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://rancher.com/docs/img/logo-square.png" alt="Etcd does not Start after Disk is Full"></div></a></figure>]]></content:encoded></item><item><title><![CDATA[Better Home Storage: MergerFS + SnapRAID on OpenMediaVault]]></title><description><![CDATA[There is a way to enjoy some benefits from RAID without "real RAID" - and a supplement to scheduled cold backup for home data storage.]]></description><link>https://blog.sakuragawa.moe/better-home-storage-mergerfs-snapraid-on-openmediavault/</link><guid isPermaLink="false">617a5c2a2730030001ea5cca</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Linux]]></category><category><![CDATA[Open Source]]></category><category><![CDATA[Quick Reference Handbook]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Sun, 31 Oct 2021 15:55:09 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2021/10/snipaste20211031_235221.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.sakuragawa.moe/content/images/2021/10/snipaste20211031_235221.png" alt="Better Home Storage: MergerFS + SnapRAID on OpenMediaVault"><p>I decided to build a NAS with old hard drives chiefly for serving media files like movie, TV shows 
and music.</p><p>When building a multi-bay system, it&apos;s inevitable to consider which kind of disk redundancy to use. Traditional RAID (both hardware and software) would work, but it also has several cons:</p><ul><li>Data is striped across disks, so it cannot be read from a single drive</li><li>It requires several disks to work simultaneously when reading or writing</li><li>If one more disk than the parity count fails, all data is lost</li></ul><p>For home data storage, I&apos;d prefer scheduled cold backup. But is there any way to enjoy some benefits from RAID without &quot;real RAID&quot;? Yes, there is!</p><h2 id="backgrounds">Backgrounds</h2><h3 id="mergerfs">MergerFS</h3><p>MergerFS can pool multiple filesystems or folders into one place. It allows us to create a RAID0-like pooled filesystem, but each file lives whole on a single drive, so data on the surviving drives stays intact when a drive breaks.</p><blockquote><strong>mergerfs</strong> is a union filesystem geared towards simplifying storage and management of files across numerous commodity storage devices. It is similar to <strong>mhddfs</strong>, <strong>unionfs</strong>, and <strong>aufs</strong>.</blockquote><h3 id="snapraid">SnapRAID</h3><p>SnapRAID is short for &quot;Snapshot RAID&quot;. It is a backup program for disk arrays and is capable of recovering from up to six disk failures. 
It is mainly targeted at home media centers, with a lot of big files that rarely change.</p><p>Some highlights of SnapRAID are:</p><ul><li>The disks can have different sizes</li><li>Add disks at any time</li><li>Quit SnapRAID without moving data or formatting disks</li><li>Anti-delete function</li></ul><h3 id="differences-between-mergerfsfolders-unionfilesystems">Differences between <code>mergerfsfolders</code> &amp; <code>unionfilesystems</code></h3><p>There are 2 similar plugins based on <code>mergerfs</code> in OMV-Extras:</p><ul><li><code>mergerfsfolders</code> pools folders</li><li><code>unionfilesystems</code> pools drives</li></ul><p>With brand-new blank drives here, we will use <code>unionfilesystems</code> for simplified configuration.</p><h2 id="prepare-system-and-drives">Prepare System and Drives</h2><p>I have 4&#xD7;2TiB hard drives and plan to use 3 of them for data and 1 for parity.</p><p>In OpenMediaVault:</p><ol><li>Install and configure OpenMediaVault as needed.</li><li>Install the <a href="https://einverne.github.io/post/2020/03/openmediavault-setup.html">OMV-Extras</a> repository.</li><li>Install the <code>openmediavault-snapraid</code> and <code>openmediavault-unionfilesystem</code> plugins.</li><li>Wipe all the disks and create filesystems on them.</li></ol><p>I also labeled all file systems with <code>DXX</code> for data disks and <code>PXX</code> for the lonely parity disk.</p><h2 id="configure-mergerfs">Configure MergerFS</h2><p>In the OpenMediaVault control panel:</p><ol><li>Navigate to <em>Storage</em> &gt; <em>Union Filesystems</em>.</li><li>Click on <em>Add</em>.</li><li>Type a pool name in <em>Name</em>.</li><li>Check all data disks (but exclude parity disks) in <em>Branches</em>.</li><li>Select <em>Existing path, most free space</em> (or <code>epmfs</code>) in <em>Create Policy</em>.</li><li><em>Save and apply.</em></li></ol><p>Here I set the <em>create policy</em> to <code>epmfs</code> because I want files kept together as much as
possible. Under the <code>epmfs</code> policy, a new file is created on the drive where its parent folder path already exists. You can check the other available <a href="https://github.com/trapexit/mergerfs#policy-descriptions"><em>create policies</em></a> to suit your needs.</p><p>Note that some applications (<code>libtorrent</code>-based clients, SQLite, etc.) require <code>mmap</code>. Since I will use qBittorrent, I need to add some options:</p><pre><code>allow_other,use_ino,cache.files=partial,dropcacheonclose=true,...</code></pre><p>Also, as I have labeled the data disks <code>D01</code>, <code>D02</code> and <code>D03</code>, I expect MergerFS to write files to them in that order. Unfortunately, this order cannot be specified or modified via the control panel, so I have to edit <code>/etc/fstab</code>:</p><pre><code class="language-fstab">....
/srv/dev-disk-by-uuid-${D01}:/srv/dev-disk-by-uuid-${D02}:/srv/dev-disk-by-uuid-${D03}            /srv/${MERGERFS_UUID}       fuse.mergerfs   defaults,allow_other,cache.files=off,use_ino,category.create=ff,minfreespace=4G,fsname=${MERGERFS_NAME}:${MERGERFS_UUID},x-systemd.requires=/srv/dev-disk-by-uuid-${D01},x-systemd.requires=/srv/dev-disk-by-uuid-${D02},x-systemd.requires=/srv/dev-disk-by-uuid-${D03} 0 0
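# category.create=ff (&quot;first found&quot;) creates each new file on the first listed
# branch with enough free space, giving the D01, D02, D03 order described above;
# the x-systemd.requires= entries make sure every member disk is mounted before the pool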
....</code></pre><p>Save the file, then <code>umount</code> and <code>mount</code> the MergerFS mountpoint. Now the writing order is corrected.</p><h2 id="configure-snapraid">Configure SnapRAID</h2><p>First configure the data disks. In the OpenMediaVault control panel:</p><ol><li>Navigate to <em>Services </em>&gt; <em>SnapRAID</em>.</li><li>Navigate to the <em>Disks</em> tab.</li><li>Click on <em>Add</em>.</li><li>Select a data disk for <em>Disk</em>.</li><li>Check <em>Content</em> and <em>Data</em>, then click on <em>Save</em>.</li></ol><p>Repeat this until all data disks are added. Then configure the parity disks:</p><ol><li>Repeat steps 1-4 from the data disk configuration.</li><li>Check <em>Parity</em>, then click on <em>Save</em>.</li></ol><p>Again, repeat this until all parity disks are configured.</p><h2 id="initialize-and-use-snapraid">Initialize and Use SnapRAID</h2><h3 id="first-start">First Start</h3><p>After SnapRAID is set up for the first time, you should synchronize the data, then create and check the parity. There are several useful commands:</p><ul><li><code>sync</code> hashes files and saves the parity</li><li><code>scrub</code> verifies the data against the generated hashes and detects errors</li><li><code>status</code> shows the current array status</li></ul><p>Combined:</p><pre><code class="language-bash">snapraid sync
snapraid scrub
snapraid status</code></pre><h3 id="rules">Rules</h3><p>Not every piece of data is worth backing up. Most of the time for home usage, only &quot;static files&quot; need to be backed up. For example, there is no strong necessity to back up container images, caches and other temporary files. These directories or files can be excluded in the <em>Rules</em> tab.</p><h3 id="scheduled-diff-and-cron-jobs">Scheduled Diff and Cron Jobs</h3><p>Click on the <em>Scheduled diff</em> button and the SnapRAID plugin will generate a cron job in the <em>Scheduled Jobs</em> tab. This job only contains the <code>diff</code> operation, which calculates file differences from the last snapshot.
To keep the parity synchronized, <code>sync</code> and <code>scrub</code> jobs are also needed. If you plan to synchronize automatically, you may want to <a href="https://gist.github.com/mtompkins/91cf0b8be36064c237da3f39ff5cc49d">check a helper script when creating jobs</a>.</p><p>Sometimes programs create empty files, which cause errors:</p><pre><code>The file &apos;${FILENAME}&apos; has unexpected zero size!
It&apos;s possible that after a kernel crash this file was lost,
and you can use &apos;snapraid fix -f ${FILENAME}&apos; to recover it.
If this an expected condition you can &apos;sync&apos; anyway using &apos;snapraid --force-zero sync&apos;</code></pre><p>If a zero-size file is expected and fine with you, simply add the <code>--force-zero</code> parameter when synchronizing.</p>]]></content:encoded></item><item><title><![CDATA[Migrate ZFS Pool with Sparse File]]></title><description><![CDATA[<p>I have a ZFS pool containing 8 disks <em>(4&#xD7;2TiB + 2&#xD7;4TiB + 2&#xD7;4TiB)</em> on an 8 bay server:</p><pre><code class="language-csh"> state: DEGRADED
status: ....
config:

        NAME           STATE     READ WRITE CKSUM
        ${SRC_POOL}    DEGRADED     0     0     0
          raidz2-0     DEGRADED     0     0     0
            da0p2      ONLINE       0     0     0
            da1p2      DEGRADED     0</code></pre>]]></description><link>https://blog.sakuragawa.moe/migrate-zfs-pool-with-sparse-file/</link><guid isPermaLink="false">61771f502730030001ea5b8f</guid><category><![CDATA[BSD]]></category><category><![CDATA[Server]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Tue, 26 Oct 2021 09:57:59 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2021/10/progress-bar.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.sakuragawa.moe/content/images/2021/10/progress-bar.png" alt="Migrate ZFS Pool with Sparse File"><p>I have a ZFS pool containing 8 disks <em>(4&#xD7;2TiB + 2&#xD7;4TiB + 2&#xD7;4TiB)</em> on an 8 bay server:</p><pre><code class="language-csh"> state: DEGRADED
status: ....
config:

        NAME           STATE     READ WRITE CKSUM
        ${SRC_POOL}    DEGRADED     0     0     0
          raidz2-0     DEGRADED     0     0     0
            da0p2      ONLINE       0     0     0
            da1p2      DEGRADED     0     0     0
            da2p2      ONLINE       0     0     0
            da3p2      ONLINE       0     0     0
          mirror-1     ONLINE       0     0     0
            da4p2      ONLINE       0     0     0
            da5p2      ONLINE       0     0     0
          mirror-2     ONLINE       0     0     0
            da6p2      ONLINE       0     0     0
            da7p2      ONLINE       0     0     0

errors: No known data errors</code></pre><p>When the time comes to replace the failing drive and expand the storage pool, I want to give up the current striped pool and create a new pool containing a single RAID-Z2 vdev <em>(4&#xD7;8TiB)</em>. Now here is the problem:</p><blockquote>How to migrate to a new pool when all 8 bays are occupied?</blockquote><p>The traditional way is to back up the content to external storage and restore it. But here is a possible solution without any external storage: creating a degraded pool with the minimum number of devices.</p><h1 id="tricky-drive-replacement">Tricky Drive Replacement</h1><p>At least 2 online devices are required to keep a RAID-Z2 vdev working. So the trick is to pull out 2 drives from the original pool and insert 2 new blank drives.</p><pre><code class="language-figure">+------+------+------+------+     +------+------+------+------+
|        RAID-Z2 4&#xD7;2T       |     | NEW1 | NEW2 |RAID-Z2 2&#xD7;2T |
|------+------+------+------| ==&gt; |------+------+------+------|
| MIRROR 2&#xD7;4T | MIRROR 2&#xD7;4T |     | MIRROR 2&#xD7;4T | MIRROR 2&#xD7;4T |
+------+------+------+------+     +------+------+------+------+</code></pre><p>Now the original pool enters DEGRADED status (it has been in degraded status anyway).</p><h1 id="create-a-degraded-pool">Create a Degraded Pool</h1><p>Create some sparse files as &quot;fake devices&quot;. The size of the sparse files should match the size of the hard drives (for convenience).</p><pre><code class="language-csh">SPARSE_SIZE=8T
SPARSE_FP_1=/root/sparsefile1
SPARSE_FP_2=/root/sparsefile2
truncate -s ${SPARSE_SIZE} ${SPARSE_FP_1}
truncate -s ${SPARSE_SIZE} ${SPARSE_FP_2}</code></pre><pre><code class="language-csh"># ls -l /root/sparsefile*
-rw-r--r--  1 root  wheel   8.0T Oct 26 04:15 sparsefile1
-rw-r--r--  1 root  wheel   8.0T Oct 26 04:15 sparsefile2</code></pre><p>Create a ZFS Storage Pool at RAID Z2 with 2 hard drives and 2 sparse files:</p><pre><code class="language-csh">zpool create ${POOL_NAME} raidz2 \
	da0p2 da1p2 \
    ${SPARSE_FP_1} \
    ${SPARSE_FP_2}
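# (optional; an assumption, not from the original setup) on 4K-sector drives
# an explicit ashift can be set at creation time: zpool create -o ashift=12 ...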
zfs set aclmode=passthrough ${POOL_NAME}
zfs set compression=lz4 ${POOL_NAME}</code></pre><p>Set the sparse files to offline so that no actual data will be written:</p><pre><code class="language-csh">zpool offline ${POOL_NAME} ${SPARSE_FP_1}
zpool offline ${POOL_NAME} ${SPARSE_FP_2}</code></pre><pre><code class="language-csh">root@storage-01:~ # zpool status SKG_STORAGE_1
  pool: ${POOL_NAME}
 state: DEGRADED
status: ....
config:

        NAME                                            STATE     READ WRITE CKSUM
        ${POOL_NAME}           DEGRADED     0     0     0
          raidz2-0             DEGRADED     0     0     0
            da0p2              ONLINE       0     0     0
            da1p2              ONLINE       0     0     0
            /root/sparsefile1  OFFLINE      0     0     0
            /root/sparsefile2  OFFLINE      0     0     0

errors: No known data errors</code></pre><p>Now we have a degraded pool with 2 actual drives and 2 fake drives. Finally, export the pool so that it can be imported in the FreeNAS Web UI:</p><pre><code class="language-csh">zpool export ${POOL_NAME}</code></pre><h1 id="take-and-send-snapshots">Take and Send Snapshots</h1><p>Create a snapshot:</p><pre><code class="language-csh">zfs snapshot -r ${SRC_POOL}@migration-base</code></pre><p>Dry-run before sending the actual data and proceed if everything checks out:</p><pre><code class="language-csh">zfs send -Rvn ${SRC_POOL}@migration-base
zfs send -Rv ${SRC_POOL}@migration-base | pv | zfs receive -Fsvd ${POOL_NAME}</code></pre><h1 id="replace-fake-devices-with-real-ones">Replace Fake Devices with Real Ones</h1><pre><code class="language-csh">zpool replace ${POOL_NAME} ${SPARSE_FP_1} da3p2
zpool replace ${POOL_NAME} ${SPARSE_FP_2} da4p2</code></pre><h1 id="references">References</h1><ul><li><a href="https://www.truenas.com/community/threads/how-do-i-make-a-degraded-2-drive-raidz.69949/">How do I make a degraded 2 drive RAIDZ?</a></li><li><a href="https://www.truenas.com/community/threads/how-to-use-zfs-send-zfs-receive-to-back-up-my-current-pools-to-a-10tb-transfer-disk.59852/">how to use zfs send | zfs receive to back up my current pools to a 10TB transfer disk?</a></li></ul>]]></content:encoded></item><item><title><![CDATA[On CloudFlare CDN Common Errors]]></title><description><![CDATA[CloudFlare provides free CDN with full SSL support. It is powerful and useful. But sometimes one can run into problems with it.]]></description><link>https://blog.sakuragawa.moe/on-cloudflare-common-errors/</link><guid isPermaLink="false">615769bfa1bc260001d15046</guid><category><![CDATA[Cloud]]></category><category><![CDATA[Network]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Fri, 01 Oct 2021 21:02:46 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2021/10/cloudflare-featured.webp" medium="image"/><content:encoded><![CDATA[<img src="https://blog.sakuragawa.moe/content/images/2021/10/cloudflare-featured.webp" alt="On CloudFlare CDN Common Errors"><p>CloudFlare provides free CDN with full SSL support. It is powerful and useful. But sometimes one can run into problems with it.</p><h1 id="toomanyredirects">TOO_MANY_REDIRECTS</h1><p>When CloudFlare CDN is set to <em>on</em>, CloudFlare will proxy access to the origin server. The connection between CloudFlare and the origin server can be either <em>HTTP</em> or <em>HTTPS</em>; this is controlled in the <em>SSL/TLS</em> tab.</p><p>If SSL is set to <em>Flexible</em>, CloudFlare will connect to the origin server via HTTP.
Now if the origin server is configured to redirect all HTTP access to HTTPS, CloudFlare will insist on connecting via HTTP, and after multiple redirects this error happens. In this case, SSL should be set to <em>Full (strict)</em>, which prompts CloudFlare to use HTTPS for connections to the origin server.</p><h1 id="errsslversionorciphermismatch">ERR_SSL_VERSION_OR_CIPHER_MISMATCH</h1><p>This is caused by multi-level subdomain DNS. The CloudFlare free tier provides SSL certificates for <code>example.com</code> and <code>*.example.com</code>. A 4th-level domain such as <code>a.b.example.com</code> will not match any valid certificate, because a wildcard only covers a single label.</p><p>To solve this problem, CloudFlare provides (paid) dedicated certificates. But if you stick with the free plan, you can only use 3rd-level domains with the CloudFlare CDN.</p>]]></content:encoded></item><item><title><![CDATA[Containerized WordPress from Sub-directory with Traefik]]></title><description><![CDATA[It becomes so easy to host a WordPress CMS using its official Docker container image. If only 1 domain is available, it might be useful to serve WordPress from a subdirectory, i.e. https://$SITE_URL/blog/.]]></description><link>https://blog.sakuragawa.moe/containerized-wordpress-from-sub-directory/</link><guid isPermaLink="false">612e2ff6462b360001196256</guid><category><![CDATA[Linux]]></category><category><![CDATA[Network]]></category><category><![CDATA[Getting Started]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Sat, 18 Sep 2021 20:42:04 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2021/09/containerized-wordpress-from-sub-directory.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.sakuragawa.moe/content/images/2021/09/containerized-wordpress-from-sub-directory.png" alt="Containerized WordPress from Sub-directory with Traefik"><p>It becomes so easy to host a WordPress CMS using its <a href="https://hub.docker.com/_/wordpress">official Docker container image</a>.
If only 1 domain is available, it might be useful to serve WordPress from a subdirectory, i.e. <code>https://$SITE_URL/blog/</code>.</p><p>First, follow the official documentation (Method II) to set up the WordPress installation in a subdirectory.</p><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://wordpress.org/support/article/giving-wordpress-its-own-directory/#method-ii-with-url-change"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Giving WordPress Its Own Directory</div><div class="kg-bookmark-description">Many people want WordPress to power their website&#x2019;s root (e.g. but they don&#x2019;t want all of the WordPress files cluttering up their root directory. WordPress allows you to install it into&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://s.w.org/favicon.ico?2" alt="Containerized WordPress from Sub-directory with Traefik"><span class="kg-bookmark-author">WordPress.org Forums</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://s.w.org/images/backgrounds/wordpress-bg-medblue.png" alt="Containerized WordPress from Sub-directory with Traefik"></div></a><figcaption>Use Method II (With URL Changes) to finish the installation</figcaption></figure><pre><code class="language-bash">$ export SUBPATH=&quot;blog&quot;
$ shopt -s extglob  # enable the !() exclusion pattern
$ mkdir -p $SUBPATH
$ mv !($SUBPATH) $SUBPATH/
$ cp $SUBPATH/index.php index.php
$ sed -i &quot;s/wp-blog-header.php/$SUBPATH\/wp-blog-header.php/g&quot; index.php</code></pre><p>Then configure the Traefik labels:</p><pre><code class="language-yaml">version: &apos;3&apos;
services:
    wordpress:
        ....
        labels:
            - &apos;traefik.enable=true&apos;
            - &apos;traefik.http.routers.blog.rule=Host(`$SITE_URL`) &amp;&amp; PathPrefix(`/blog`)&apos;
            - &apos;traefik.http.routers.blog.entrypoints=https&apos;
            - &apos;traefik.http.routers.blog.tls=true&apos;
        ....</code></pre><h1 id="references">References</h1><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/docker-library/wordpress/issues/388#issuecomment-470745155"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Subdirectory problem (wp-admin redirect) &#xB7; Issue #388 &#xB7; docker-library/wordpress</div><div class="kg-bookmark-description">I have looked through several related issues, but could not find a solution. It only happens when I&amp;#39;m at /wp-admin. I get a sort of &amp;#39;redirect&amp;#39; when I access wp-admin. Seems like a pushS...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Containerized WordPress from Sub-directory with Traefik"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">docker-library</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/a88d150b6ce9f60ce606ec72f2ab13286ec4b47aeb4f357d82c9160a4574c4de/docker-library/wordpress/issues/388" alt="Containerized WordPress from Sub-directory with Traefik"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/docker-library/wordpress/issues/221#issuecomment-356454850"><div class="kg-bookmark-content"><div class="kg-bookmark-title">[URGENT NEED] [Please Help Needed A Lot] - Unable to route nginx to wordpress (fpm or apache) for /subdirectory config &#xB7; Issue #221 &#xB7; docker-library/wordpress</div><div class="kg-bookmark-description">I have been trying this more than 12 hours and still not able to figure the perfect nginx config to route to a /blog wordpress installation.
Need this in real urgent basis, have to shift company&amp;#3...</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.com/fluidicon.png" alt="Containerized WordPress from Sub-directory with Traefik"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">docker-library</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/191a9ac8d8aa664390180fa1ccf471ce417828a3e4792740c92c448f0ab98ff1/docker-library/wordpress/issues/221" alt="Containerized WordPress from Sub-directory with Traefik"></div></a></figure><p></p>]]></content:encoded></item><item><title><![CDATA[Elastic + Grafana Data Visualization: Elastic Data Source Notes]]></title><description><![CDATA[Some errors encountered while adapting an Elastic data source.]]></description><link>https://blog.sakuragawa.moe/elastic-grafana-data-visualization-elastic-data-source-notes/</link><guid isPermaLink="false">60d05e911c06540001a64559</guid><category><![CDATA[Tooling]]></category><category><![CDATA[Quick Reference Handbook]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Mon, 21 Jun 2021 09:21:00 GMT</pubDate><content:encoded><![CDATA[<h1 id="elastic-limit-of-total-fields-has-been-exceeded">Elastic: Limit of total fields has been exceeded</h1><p>When the fields in a document exceed the limit of total fields of an <em>index</em>, this error happens.</p><pre><code class="language-json">{
  &quot;index&quot;: &quot;$INDEX&quot;,
  &quot;type&quot;: &quot;_doc&quot;,
  &quot;id&quot;: &quot;675b5f10-dcd2-11ea-80e8-0a580a8002ca&quot;,
  &quot;cause&quot;: {
      &quot;type&quot;: &quot;illegal_argument_exception&quot;,
      &quot;reason&quot;: &quot;Limit of total fields [1000] in index [$INDEX] has been exceeded&quot;
  },
  &quot;status&quot;: 400
}
</code></pre><p>This can be fixed by updating the <em>index</em> settings:</p><pre><code class="language-bash">curl --request PUT \
  --url $ES_URL/$INDEX/_settings \
  --header &apos;Content-Type: application/json&apos; \
  --data &apos;{ &quot;index.mapping.total_fields.limit&quot;: 8192 }&apos;
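# (optional check) confirm the new limit is applied:
curl &quot;$ES_URL/$INDEX/_settings?pretty&quot;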
</code></pre><h1 id="elastic-timed-out-while-waiting-for-a-dynamic-mapping-update">Elastic: Timed out while waiting for a dynamic mapping update</h1><pre><code class="language-json">{
  &quot;index&quot;: &quot;$INDEX&quot;,
  &quot;type&quot;: &quot;_doc&quot;,
  &quot;id&quot;: &quot;-iS-Lm8B-sHL5u3eAKZr&quot;,
  &quot;cause&quot;: {
      &quot;type&quot;: &quot;mapper_exception&quot;,
      &quot;reason&quot;: &quot;timed out while waiting for a dynamic mapping update&quot;
  },
  &quot;status&quot;: 500
}
</code></pre><p>This error can have multiple causes. But if it is simply caused by low performance of the Elastic instance (which is my case), it can be avoided by extending the timeout from the default 1 minute. For example, add the parameter <code>timeout=1h</code> to the <em>reindex</em> request:</p><pre><code class="language-bash">curl --request POST \
  --url &apos;$ES_URL/_reindex?timeout=1h&amp;wait_for_completion=false&apos; \
  --header &apos;Content-Type: application/json&apos; \
  --data @payload.json
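# wait_for_completion=false makes Elasticsearch return a task ID immediately;
# the task can then be polled at $ES_URL/_tasks/&lt;task_id&gt;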
</code></pre><h1 id="grafana-failed-to-create-query-field-expansion-matches-too-many-fields">Grafana: failed to create query: field expansion matches too many fields</h1><p>When using Elastic as a data source, graph drawing might fail:</p><blockquote>failed to create query: field expansion matches too many fields, limit: 1024, got: 1165</blockquote><p>This is caused by a too-low <code>indices.query.bool.max_clause_count</code> value, which defaults to <code>1024</code>.</p><blockquote>This setting limits the number of clauses a Lucene BooleanQuery can have. The default of 1024 is quite high and should normally be sufficient.</blockquote><p>Unfortunately this is a <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html#static-cluster-setting">static cluster setting</a> and cannot be changed on a running node. Here I need to add an environment variable in <code>docker-compose.yml</code>:</p><pre><code class="language-yaml">version: &apos;2.2&apos;
services:
  es-01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    container_name: es01
    environment:
      ....
      - indices.query.bool.max_clause_count=8192
      ....
  ....
</code></pre><p>And then restart the nodes using <code>docker-compose restart</code> for the configuration to take effect. Now Grafana can draw the graph.</p>]]></content:encoded></item><item><title><![CDATA[Change Elastic Index Mappings]]></title><description><![CDATA[Converting and changing the type of a column in a database is easy. Unfortunately, the type of a field in a mapping cannot be changed in Elastic. If documents are already indexed, the mapping cannot be changed and a reindex is needed.]]></description><link>https://blog.sakuragawa.moe/change-elastic-index-mappings/</link><guid isPermaLink="false">60cc61c498872c0001c8ae27</guid><category><![CDATA[Getting Started]]></category><category><![CDATA[Tooling]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Fri, 18 Jun 2021 08:46:00 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2021/06/Screenshot_20210621_174643.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.sakuragawa.moe/content/images/2021/06/Screenshot_20210621_174643.png" alt="Change Elastic Index Mappings"><p>Converting and changing the type of a column in a database is easy. Unfortunately, the type of a field in a mapping cannot be changed in Elastic. If documents are already indexed, the mapping cannot be changed and a reindex is needed.</p><h1 id="dynamic-mapping-failed-grafana">Dynamic Mapping Fails Grafana</h1><p>Elastic can be used as a data source for Grafana.
But if there exists no field of <code>date</code> type in an <em>index</em> for <em>Time field name</em>, the data source cannot be configured properly.</p><p>In my case, unfortunately, the <code>timestamp</code> field is set to <code>text</code> type during the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/explicit-mapping.html#update-mapping">dynamic mapping</a> process due to an unrecognizable time format (it requires <code>yyyy-MM-dd&apos;T&apos;HH:mm:ss.SSSSSSZ</code> but gets no &quot;T&quot; as a delimiter).</p><p>As explained in the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic-mapping.html">official documentation</a>:</p><blockquote>Except for supported mapping parameters, you can&#x2019;t change the mapping or field type of an existing field. Changing an existing field could invalidate data that&#x2019;s already indexed.</blockquote><blockquote>If you need to change the mapping of a field in other indices, create a new index with the correct mapping and reindex your data into that index.</blockquote><p>Elasticsearch does not provide the functionality to change types for existing <em>fields</em>.
A workaround is to import the data to a new <em>index</em> with correct types:</p><ol><li>create a new <em>index</em></li><li>put an updated <em>mapping</em> (<a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html">manually create a <em>mapping</em></a> where fields have the types you need before sending the first data) to the new <em>index</em></li><li>use the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-reindex.html">reindex API</a> to copy data from the old <em>index</em> to the newly created one</li><li>(optionally) create an <em>alias</em> with the old <em>index</em> name pointing to the new <em>index</em></li></ol><h1 id="construct-explicit-mapping-request">Construct Explicit Mapping Request</h1><p>First, inspect the old <em>index</em> mapping to find out what is wrong:</p><pre><code class="language-bash">$ curl &quot;$ES_URL/$INDEX/_mapping/field/timestamp?pretty&quot;</code></pre><pre><code class="language-json">{
  &quot;testrun&quot;: {
    &quot;mappings&quot;: {
      &quot;timestamp&quot;: {
        &quot;full_name&quot;: &quot;timestamp&quot;,
        &quot;mapping&quot;: {
          &quot;timestamp&quot;: {
            &quot;type&quot;: &quot;text&quot;,
            &quot;fields&quot;: {
              &quot;keyword&quot;: {
                &quot;type&quot;: &quot;keyword&quot;,
                &quot;ignore_above&quot;: 256
              }
            }
          }
        }
      }
    }
  }
}
</code></pre><p>We can clearly see a <code>timestamp</code> <em>field</em> with <code>text</code> type. This confirms the problem, as Grafana requires the <code>date</code> type. So we can construct a <code>payload.json</code> with an explicit mapping:</p><figure class="kg-card kg-code-card"><pre><code class="language-json">{
  &quot;properties&quot;: {
    &quot;timestamp&quot;: {
      &quot;type&quot;: &quot;date&quot;,
      &quot;format&quot;: &quot;yyyy-MM-dd HH:mm:ss.SSSSSSZZZZZ&quot;
    }
  }
}
</code></pre><figcaption>payload.json</figcaption></figure><p>Here we have a <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping-date-format.html#custom-date-formats">custom date format</a> because our raw data is in non-standard format like <code>2017-11-30 11:59:19.717648+00:00</code>.</p><p>Then we will create a new <em>index</em> and put an explicit mapping into it.</p><pre><code class="language-bash">$ curl -X PUT $ES_URL/$INDEX_NEW
$ curl -X PUT $ES_URL/$INDEX_NEW/_mapping -H &quot;Content-Type: application/json&quot; -d @payload.json
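$ # sanity check (optional): the field should now be reported as &quot;date&quot;
$ curl &quot;$ES_URL/$INDEX_NEW/_mapping/field/timestamp?pretty&quot;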
</code></pre><h1 id="reindex-from-old-index-to-new">Reindex from Old <em>Index</em> to New</h1><h2 id="reindex-api">Reindex API</h2><p>Again we can create a <code>payload.json</code>:</p><figure class="kg-card kg-code-card"><pre><code class="language-json">{
  &quot;source&quot;: {
    &quot;index&quot;: &quot;$INDEX&quot;
  },
  &quot;dest&quot;: {
    &quot;index&quot;: &quot;$INDEX_NEW&quot;
  }
}
</code></pre><figcaption>payload.json</figcaption></figure><pre><code class="language-bash">$ curl -X POST &quot;$ES_URL/_reindex?pretty&quot; -H &apos;Content-Type: application/json&apos; -d @payload.json
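$ # (optional) progress of running reindex tasks can be watched via the task API:
$ curl &quot;$ES_URL/_tasks?actions=*reindex&amp;detailed=true&amp;pretty&quot;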
</code></pre><p>If no errors are returned, the reindex process has started. The process will take from minutes to hours, depending on the quantity of data. But by now Grafana should recognize the <em>field</em> and show</p><blockquote>Index OK. Time field name OK.</blockquote><p>And the data source is successfully configured.</p><h2 id="reindex-asynchronously">Reindex Asynchronously</h2><p>Reindexing a huge index can take hours to days and will result in an HTTP request timeout. To avoid this, run the reindex as an asynchronous task. From the documentation:</p><blockquote>If the request contains <code>wait_for_completion=false</code>, Elasticsearch performs some preflight checks, launches the request, and returns a task you can use to cancel or get the status of the task. Elasticsearch creates a record of this task as a document at <code>_tasks/&lt;task_id&gt;</code></blockquote><pre><code class="language-bash">$ curl -X POST &quot;$ES_URL/_reindex?wait_for_completion=false&quot; -H &apos;Content-Type: application/json&apos; -d @payload.json
</code></pre><p>This returns a <em>task</em> represented by its ID:</p><pre><code class="language-json">{ &quot;task&quot;: &quot;Sb_1wXmiTWWY2uFEczPhIQ:868863502&quot; }
</code></pre><p>And this task ID can be used to query the progress:</p><pre><code class="language-bash">$ export TASK_ID=&quot;Sb_1wXmiTWWY2uFEczPhIQ:868863502&quot;
$ curl &quot;$ES_URL/_tasks/$TASK_ID?pretty&quot;
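$ # or extract just the completion flag (assumes jq is installed):
$ curl -s &quot;$ES_URL/_tasks/$TASK_ID&quot; | jq .completed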
</code></pre><p>Any warning or error will be included in the query result. When you confirm the process is done and task logs are no longer needed, this <em>task</em> should be deleted. The <em>tasks</em> are stored in an internal <em>index</em>, <code>.tasks</code>, and that is where the <em>task document</em> should be deleted from.</p><pre><code class="language-bash">$ curl -X DELETE &quot;$ES_URL/.tasks/task/$TASK_ID&quot;</code></pre><h1 id="how-about-the-old-index">How About the Old <em>Index</em>?</h1><p>There is no <em>index</em> renaming operation in Elastic. But with the experience we&apos;ve gained, it is easy to rename one by creating, reindexing and deleting.</p><p>There is also an option to <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-aliases.html">create an alias</a> for the <em>index</em>.</p>]]></content:encoded></item><item><title><![CDATA[MultiGarment-Network under Conda Environment]]></title><description><![CDATA[How to configure Conda environments for MultiGarment-Network (both Python 2 & 3).]]></description><link>https://blog.sakuragawa.moe/run-multigarment-network-under-conda/</link><guid isPermaLink="false">60b52e95b622f400010ff12d</guid><category><![CDATA[Linux]]></category><category><![CDATA[Python]]></category><category><![CDATA[TensorFlow]]></category><dc:creator><![CDATA[Sakuragawa Asaba]]></dc:creator><pubDate>Wed, 02 Jun 2021 03:03:11 GMT</pubDate><media:content url="https://blog.sakuragawa.moe/content/images/2021/06/teaser_4.png" medium="image"/><content:encoded><![CDATA[<figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://virtualhumans.mpi-inf.mpg.de/mgn/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Multi-Garment Net: Learning to Dress 3D People from Images</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"></div></div><div class="kg-bookmark-thumbnail"><img
src="https://virtualhumans.mpi-inf.mpg.de/mgn/teaser_4.png" alt="MultiGarment-Network under Conda Environment"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://arxiv.org/abs/1908.06903"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Multi-Garment Net: Learning to Dress 3D People from Images</div><div class="kg-bookmark-description">We present Multi-Garment Network (MGN), a method to predict body shape andclothing, layered on top of the SMPL model from a few frames (1-8) of a video.Several experiments demonstrate that this representation allows higher level ofcontrol when compared to single mesh or voxel representations of s&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://static.arxiv.org/static/browse/0.3.2.6/images/icons/favicon.ico" alt="MultiGarment-Network under Conda Environment"><span class="kg-bookmark-author">arXiv.org</span><span class="kg-bookmark-publisher">Bharat Lal Bhatnagar</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://static.arxiv.org/static/browse/0.3.2.6/images/icons/social/bibsonomy.png" alt="MultiGarment-Network under Conda Environment"></div></a></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/bharat-b7/MultiGarmentNetwork"><div class="kg-bookmark-content"><div class="kg-bookmark-title">bharat-b7/MultiGarmentNetwork</div><div class="kg-bookmark-description">Repo for &#x201C;Multi-Garment Net: Learning to Dress 3D People from Images, ICCV&#x2032;19&#x201D; - bharat-b7/MultiGarmentNetwork</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="MultiGarment-Network under Conda Environment"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">bharat-b7</span></div></div><div class="kg-bookmark-thumbnail"><img 
src="https://opengraph.githubassets.com/af678a5754a27714cc5eb8d6704defc814e2682b5064df5bf93d7a6d9d7deed0/bharat-b7/MultiGarmentNetwork" alt="MultiGarment-Network under Conda Environment"></div></a></figure><img src="https://blog.sakuragawa.moe/content/images/2021/06/teaser_4.png" alt="MultiGarment-Network under Conda Environment"><p>Basic environment: CentOS 7, NVIDIA drivers (CUDA 10.1.243), Conda 4.10.</p><blockquote>I hate Conda!</blockquote><h1 id="python-2-environment">Python 2 Environment</h1><p>First attempt is with Python 2.7 since the readme says</p><blockquote>The code has been tested in python 2.7, Tensorflow 1.13</blockquote><p>Create Conda environment:</p><pre><code class="language-shell">$ conda create -n mgn-py27 python=2.7 cudatoolkit=10.1 cudnn=7.6.5
$ conda activate mgn-py27</code></pre><p><code>MultiGarment-Network</code> has 2 dependencies that we need to install manually.</p><h2 id="install-dirt">Install dirt</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/pmh47/dirt"><div class="kg-bookmark-content"><div class="kg-bookmark-title">pmh47/dirt</div><div class="kg-bookmark-description">DIRT: a fast differentiable renderer for TensorFlow - pmh47/dirt</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="MultiGarment-Network under Conda Environment"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">pmh47</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/2b9e023b10302e8c889036fb68ffd58205b89b82f67d92296d896864d051cd8b/pmh47/dirt" alt="MultiGarment-Network under Conda Environment"></div></a></figure><p>Install dependencies:</p><pre><code class="language-shell">(mgn-py27)$ conda install cmake gcc_linux-64
(mgn-py27)$ conda install tensorflow-gpu=1.13</code></pre><p>However, <code>pip list</code> does not show the <code>tensorflow-gpu</code> package, so we need to install it again from <em>PyPI</em> before building and installing:</p><pre><code class="language-shell">(mgn-py27)$ pip install --ignore-installed tensorflow==1.13.1
(mgn-py27)$ pip install .</code></pre><h2 id="install-mesh">Install Mesh</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/MPI-IS/mesh"><div class="kg-bookmark-content"><div class="kg-bookmark-title">MPI-IS/mesh</div><div class="kg-bookmark-description">MPI-IS Mesh Processing Library. Contribute to MPI-IS/mesh development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="MultiGarment-Network under Conda Environment"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">MPI-IS</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/75b79307a6ecc8d5126d3fdd61c96cc263a9d6590952120d440382194d88c87c/MPI-IS/mesh" alt="MultiGarment-Network under Conda Environment"></div></a></figure><pre><code class="language-shell">(mgn-py27)$ conda install gxx_linux-64 opencv
(mgn-py27)$ git checkout 1761d544686b3735991954947a8befa759891eb4
(mgn-py27)$ make
(mgn-py27)$ cd dist &amp;&amp; pip install psbody_mesh-0.1-cp27-cp27mu-linux_x86_64.whl</code></pre><h2 id="run-multigarment-network">Run MultiGarment-Network</h2><pre><code class="language-shell">(mgn-py27)$ conda install matplotlib
(mgn-py27)$ pip install &quot;scikit-learn&lt;0.18&quot; chumpy
(mgn-py27)$ python test_network.py
Using dirt renderer.
....
Done
freeglut (mesh_viewer):  ERROR:  Internal error &lt;FBConfig with necessary capabilities not found&gt; in function fgOpenWindow</code></pre><hr><h1 id="python-3-environment-wip">Python 3 Environment (WIP)</h1><p>Create a Conda environment:</p><pre><code class="language-shell">$ conda create -n mgn-py36 python=3.6 cudatoolkit=10.0 cudnn=7.6.5
$ conda activate mgn-py36</code></pre><h2 id="install-dirt-1">Install dirt</h2><p>Note the <a href="https://github.com/pmh47/dirt#troubleshooting">troubleshooting section</a> in the description:</p><blockquote>If you are using TensorFlow 1.14, there are some binary compatibility issues when using older versions of python (e.g. 2.7 and 3.5), due to compiler version mismatches. These result in a segfault at <code>tensorflow::shape_inference::InferenceContext::GetAttr</code> or similar. To resolve, either upgrade python to 3.7, or downgrade TensorFlow to 1.13, or build DIRT with gcc 4.8</blockquote><p>Thus we will use the older TensorFlow 1.13 to avoid this known issue. And since the package from <code>conda-forge</code> messes up the dependencies, we must pin the channel to <code>anaconda</code>:</p><pre><code class="language-shell">(mgn-py36)$ conda install -c anaconda tensorflow-gpu=1.13
(mgn-py36)$ pip install tensorflow-gpu==1.13.1</code></pre><p>Then install the remaining build dependencies:</p><pre><code class="language-shell">(mgn-py36)$ conda install cmake gcc_linux-64</code></pre><p>Finally, build and install <code>dirt</code> with a few extra environment variables:</p><pre><code class="language-shell">(mgn-py36)$ export CUDA_HOME=/usr/local/cuda-10.1
(mgn-py36)$ export PATH=$CUDA_HOME/bin:$PATH
(mgn-py36)$ pip install .
(mgn-py36)$ python tests/square_test.py
....
successful: all pixels agree</code></pre><h2 id="install-mesh-1">Install Mesh</h2><pre><code class="language-shell">(mgn-py36)$ conda install boost
(mgn-py36)$ conda install pyopengl pillow pyzmq pyyaml
(mgn-py36)$ conda install gxx_linux-64
(mgn-py36)$ make all
(mgn-py36)$ make tests
....
Ran 28 tests in 5.882s</code></pre><h2 id="convert-multigarment-network-to-python-3">Convert MultiGarment-Network to Python 3</h2><pre><code class="language-shell">(mgn-py36)$ conda install matplotlib scikit-learn==0.17 chumpy</code></pre><p>Since we are running Python 3, we will need to replace <code>cPickle</code> with <code>_pickle</code> as the former does not exist. </p><pre><code class="language-shell">(mgn-py36)$ find ./ -type f -name \*.py -exec sed -i -e &apos;s/cPickle/_pickle/g&apos; {} \;
(mgn-py36)$ 2to3 -w -n MultiGarmentNetwork/</code></pre><p>Due to a Python 2/3 incompatibility in string encoding, Pickle deserialization will fail:</p><pre><code class="language-python">Traceback (most recent call last):
  File &quot;test_network.py&quot;, line 171, in &lt;module&gt;
    _, faces = pkl.load(f)
UnicodeDecodeError: &apos;ascii&apos; codec can&apos;t decode byte 0x8c in position 16: ordinal not in range(128)</code></pre><p>You will need to change the Pickle loading to</p><pre><code class="language-python">_, faces = pkl.load(open(file_path, &apos;rb&apos;), encoding=&apos;latin1&apos;)</code></pre><p>A too-old <code>scikit-learn</code> is also a problem:</p><pre><code class="language-python">Traceback (most recent call last):
  File &quot;test_network.py&quot;, line 176, in &lt;module&gt;
    pca_verts[garment] = pkl.load(f)
  File &quot;$CONDA_HOME/lib/python3.6/site-packages/sklearn/decomposition/__init__.py&quot;, line 10, in &lt;module&gt;             
    from .kernel_pca import KernelPCA
  File &quot;$CONDA_HOME/lib/python3.6/site-packages/sklearn/decomposition/kernel_pca.py&quot;, line 13, in &lt;module&gt;           
    from ..metrics.pairwise import pairwise_kernels
  File &quot;$CONDA_HOME/lib/python3.6/site-packages/sklearn/metrics/__init__.py&quot;, line 33, in &lt;module&gt;                   
    from . import cluster
  File &quot;$CONDA_HOME/lib/python3.6/site-packages/sklearn/metrics/cluster/__init__.py&quot;, line 8, in &lt;module&gt;
    from .supervised import adjusted_mutual_info_score
  File &quot;$CONDA_HOME/lib/python3.6/site-packages/sklearn/metrics/cluster/supervised.py&quot;, line 14, in &lt;module&gt;         
    from scipy.misc import comb
ImportError: cannot import name &apos;comb&apos;</code></pre><p>Changing to <code>from scipy.special import comb</code> will solve the problem.</p><p>At this point the network should run without syntax errors. <a href="https://github.com/bharat-b7/MultiGarmentNetwork/issues/17#issuecomment-603580207">Some report</a> that the network runs after the above changes.</p><p>But here we will still get this error:</p><pre><code class="language-python">Traceback (most recent call last):
  File &quot;test_network.py&quot;, line 185, in &lt;module&gt;
    pred = get_results(m, dat)
  File &quot;test_network.py&quot;, line 55, in get_results
    out = m([images, vertex_label, J_2d])
  File &quot;$CONDA_PREFIX/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py&quot;, line 592, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File &quot;/public/wl4/clothing/MultiGarmentNetwork-py3/network/base_network.py&quot;, line 336, in call
    garm_model_outputs = [fe(latent_code_offset_ShapeMerged) for fe in self.garmentModels]
  File &quot;/public/wl4/clothing/MultiGarmentNetwork-py3/network/base_network.py&quot;, line 336, in &lt;listcomp&gt;
    garm_model_outputs = [fe(latent_code_offset_ShapeMerged) for fe in self.garmentModels]
  File &quot;$CONDA_PREFIX/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py&quot;, line 592, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File &quot;/public/wl4/clothing/MultiGarmentNetwork-py3/network/base_network.py&quot;, line 65, in call
    x = self.PCA_(pca_comp)
  File &quot;$CONDA_PREFIX/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py&quot;, line 592, in __call__
    outputs = self.call(inputs, *args, **kwargs)
  File &quot;/public/wl4/clothing/MultiGarmentNetwork-py3/network/custom_layers.py&quot;, line 33, in call
    return tf.reshape(tf.matmul(x, self.components) + self.mean, (-1, K.int_shape(self.mean)[0] / 3, 3))
  File &quot;$CONDA_PREFIX/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py&quot;, line 7161, in reshape
    tensor, shape, name=name, ctx=_ctx)
  File &quot;$CONDA_PREFIX/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py&quot;, line 7206, in reshape_eager_fallback
    ctx=_ctx, name=name)
  File &quot;$CONDA_PREFIX/lib/python3.6/site-packages/tensorflow/python/eager/execute.py&quot;, line 66, in quick_execute
    six.raise_from(core._status_to_exception(e.code, message), None)
  File &quot;&lt;string&gt;&quot;, line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: Value for attr &apos;Tshape&apos; of float is not in the list of allowed values: int32, int64
        ; NodeDef: {{node Reshape}}; Op&lt;name=Reshape; signature=tensor:T, shape:Tshape -&gt; output:T; attr=T:type; attr=Tshape:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]&gt; [Op:Reshape]</code></pre><p>The culprit appears to be the shape tuple in <code>custom_layers.py</code>: in Python 3, <code>/</code> is true division, so <code>K.int_shape(self.mean)[0] / 3</code> produces a float, which <code>Reshape</code> rejects. Changing it to integer division, <code>K.int_shape(self.mean)[0] // 3</code>, should give <code>tf.reshape</code> the integer shape it expects.</p><p>There is also a repository containing a Python 3 version of MultiGarment-Network:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/minar09/MGN-Py3"><div class="kg-bookmark-content"><div class="kg-bookmark-title">minar09/MGN-Py3</div><div class="kg-bookmark-description">Multi-Garment Network implementation for Python3. Contribute to minar09/MGN-Py3 development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="MultiGarment-Network under Conda Environment"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">minar09</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/e0ed98d4edf52ea20ce8fee025654d4b584906ddd111decfac73f453baa5d269/minar09/MGN-Py3" alt="MultiGarment-Network under Conda Environment"></div></a></figure>]]></content:encoded></item></channel></rss>