Bashing my BashRC – Productivity Fridays


Welcome to Productivity Friday. This series will post on a Friday and, at least for the time being, every other Friday. In it I will be diving into tips and tricks from my workflow. While mainly technical, I'm hoping the series is broad enough to be interesting to both developers and system administrators.

Bashing Bash

So this week we will be looking at my bashrc file…

Tim, you use ZSH, we know that; you are a Mac-using, coffee-sipping hipster. Yep, you got me: on servers I fully control, and on my own machine, I do indeed use zsh. However, for work I'm often on servers where zsh is not available, and in those scenarios I have a minimal bash setup and a few programs I place in my own home directory to keep me going.

I have a very tiny build script that takes my dotfiles from version control, plus a couple of directories, tars the files up, and uploads them to a web-accessible URL.

On a server where I wish to add my bash profile, I simply make sure I'm in my home directory, wget/curl the tar file, and unpack it.
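A minimal sketch of what that build-and-deploy loop looks like. All file names, paths, and the example URL here are my own placeholders, not from the real script:

```shell
set -e
# Stand-in "dotfiles" source directory and a target "home" directory:
mkdir -p /tmp/dotfiles-demo/src /tmp/dotfiles-demo/home
echo 'alias q=exit' > /tmp/dotfiles-demo/src/.bashrc
# Build step: bundle the dotfiles into a single tarball...
tar -czf /tmp/dotfiles-demo/dotfiles.tar.gz -C /tmp/dotfiles-demo/src .bashrc
# ...which would then be uploaded somewhere web accessible.
# On the server, fetch and unpack straight into $HOME, e.g.:
#   wget -qO- https://example.com/dotfiles.tar.gz | tar -xz -C "$HOME"
# Here we unpack locally just to show the round trip:
tar -xzf /tmp/dotfiles-demo/dotfiles.tar.gz -C /tmp/dotfiles-demo/home
```

The nice part of the tarball approach is that one wget/curl pipe is all a fresh server needs.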

The result is that, on the whole, Bash and ZSH feel similar to me, particularly when it comes to aliases.

What does my Home directory look like?


Bashrc or bash_profile?

So, remember back to the post about nanorc and naming conventions and all that good stuff? Well, think of those more as guidelines; in practice nothing works quite as you expect. On most servers, if you are using bash, you will find two files, .bashrc and .bash_profile, and whichever one you add content to appears to work.

So you add your aliases to .bash_profile and it works fine…

Ah no, you see, it works fine when you access the terminal via your login. However, if you have a separate session that didn't occur at login, then .bash_profile is not run for that session.

I see the confusion; let me give you an example:

I ssh in, I "login" and get a tty, and my .bash_profile is loaded. HAPPY DAYS

I then run the "screen" command and a new session is created within screen, but I didn't login, so .bash_profile didn't run!

But bashrc did run…

But it didn’t run at login, except chances are, on your system, it did.

Yep, it ran on my system, why?

If we open your .bash_profile, I'm willing to bet, unless you have been editing it, that you have something like:

if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi

Which basically says: go run .bashrc.

Assuming this is the case, we can put things in our bashrc file.
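If you're ever unsure which startup file applied in a given session, bash can tell you whether it is a login shell via the `shopt` builtin. A quick sketch (runnable from any shell, since it asks bash directly):

```shell
# shopt -q login_shell succeeds only in a login shell, so this reports
# which startup file bash would have read for the session:
kind=$(bash -c 'if shopt -q login_shell; then echo login; else echo non-login; fi')
echo "$kind"   # a bash -c subshell is non-login, so this prints: non-login
```

Run the inner `if` inside a screen session or an SSH tty to see the two cases differ.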

Hi, macOS users still using bash

Yeah, so the Mac is a little less POSIX-friendly, or rather Terminal is. As a consequence, in Terminal on a Mac, .bash_profile always runs, meaning, well, having a bashrc becomes less of a thing. To be honest, for pure simplicity, I recommend just using .bashrc anyway and adding the code from above to your .bash_profile.

Bashrc File

My bashrc file is split into sections, and these are stored as .bashrc files in a .bashrc.d directory.


My bashrc file primarily looks like:

# Aliases
for file in ~/.bashrc.d/aliases/*.bashrc; do
    source "$file"
done

# Commands
for file in ~/.bashrc.d/commands/*.bashrc; do
    source "$file"
done

# Misc
for file in ~/.bashrc.d/misc/*.bashrc; do
    source "$file"
done

# Let's get going!
export EDITOR=nano
export VISUAL=nano
export PATH

# Local file for local people
if [ -f ~/.localrc ]; then
    source ~/.localrc
fi

So, let's face it, that was underwhelming, and that's because all the fun, interesting stuff is in the separate files. The nice thing about doing it this way is that I can change things depending on the server. For example, my .bashrc.d/commands/centos6.bashrc looks like:

systemctl () {
    echo "!!!CENTOS6 TRY AGAIN!!!"
    sudo service ${2} ${1}
}
As I quite often switch between CentOS 8/7/6, and on CentOS 6 systemd is not a thing, yet my muscle memory is now very much systemd.

Before wandering too far into the individual parts of my bashrc file, it's worth just stopping at my bin/ folder. This is where I keep a bunch of small utility programs that may or may not be on a system; normally, at minimum, in there are:

  • nano – version 4+, as CentOS and Ubuntu ship with ancient versions
  • fzf – because everyone should fuzzy-find stuff; this almost certainly will be its own Productivity Friday post, or part of one.
  • ccat – because I totally need colour… it does actually make quite a big difference.
  • colorls – no seriously, I like colour
  • grep – similar to nano, an up-to-date version, because grep can be ancient on some systems
  • jq – a small command-line tool for manipulating JSON
  • htop – yes, you are probably spotting a pattern

Then a bunch of tooling as needed or developed that is usually server or job specific. The difference between bin/ and scripts/ is that bin/ contains, well, binaries, whereas scripts/ can be anything. The bin directory is in my PATH, whereas the scripts directory is not.
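For the bin/ directory to win over the ancient system versions, it needs to sit at the front of PATH. A minimal sketch of that line (the exact form in my dotfiles may differ):

```shell
# Prepend ~/bin so local builds of nano, grep, etc. shadow the system ones:
PATH="$HOME/bin:$PATH"
export PATH
# The first PATH entry is now the home bin directory:
echo "${PATH%%:*}"
```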

So back to the bashrc.d files


Aliases are stored in .bashrc.d/aliases

I have some common aliases that go on every machine and then some that are dependent on the type of server I’m on

My nav.bashrc is pretty basic:

alias cd..='cd ../'
alias ..='cd ../'
alias ...='cd ../../'
alias ....='cd ../../../'
alias ~='cd ~'
alias vhost='cd /var/www/vhosts/'

This is supplemented by my md & gd commands which we’ll get to later. 

It allows me to quickly move around directories, and I also have shorthand aliases for a few locations I commonly use; for example, just typing vhost takes me to the /var/www/vhosts/ directory.

Next up are my sudo aliases which I know a few people are going to squirm over

alias sudo='sudo env "PATH=$PATH"'
alias snano='sudo nano --rcfile=/home/tnash/.nanorc'
alias please='sudo $(history -p !!)'
alias redo='sudo $(history -p !!)'

Adding my PATH as an environment variable is basically to allow sudo to access my bin/ folder. I could just add it to the existing path, but not easily, so this is a safe-ish way. The moment I spawn another session, say a screen session, I lose it in that session, but for my purposes it does what I need.

Other than that, snano is just a shortcut for sudo nano; again, I load in my .nanorc file just to keep things matching. Then please/redo, which I use a reasonable amount; I tend to say please, as it tickles me. It simply repeats the last command, but this time with sudo in front.

# Common Tools
alias ls="colorls -A"
alias lll='ls -ahl'
alias grep='grep --color=auto'
alias df='df -Ph | column -t'
alias q='exit'
alias mkdir='mkdir -pv'
take () {
    mkdir -pv $1
    cd $1
}
alias cat='ccat --bg=dark'
alias qedit='nano --tempfile --backup'

My common tools, mainly just a few presets to get colour working. 

In addition, df is set to use columns and a sensible unit of measurement for the 2000s.

mkdir is set so it will automatically create missing parent directories.

q as a quick exit.

Really, take should be in the commands section, but it is a complete clone of Oh My Zsh's take, which creates a directory and then changes into that directory.

#Finding things
alias tail='tail -n25'
alias head='head -n25'
alias stail='sudo tail -n25'
alias shead='sudo head -n25'
alias tailf='tail -f'
alias logs="sudo find /var/log -type f -exec file {} \; | grep 'text' | cut -d' ' -f1 | sed -e's/:$//g' | grep -v '[0-9]$' | xargs tail -f"

My finding-things file has some larger presets for tail & head, as well as their sudo equivalents; also tailf, because two extra keyboard taps.

One useful command is logs, which just tails every log file in /var/log.

After that, my aliases vary depending on host. For example, on machines running OpenVZ I have a bunch of aliases surrounding vzctl and vzlist. Because I'm often going in and out of vzctl containers, I have a very minified version of this bashrc file (with just a few commands and no colour :( ) which I will load into the container.

# OpenVZ Specific
alias vzctl='sudo vzctl'
alias vze='sudo vzctl enter'
alias vzrestart='sudo vzctl restart'
vzall () {
    for ctid in `sudo vzlist -H -o ctid`; do sudo vzctl exec2 $ctid "${1}" && echo $ctid; done;
}
vzenter () {
    if [ ${1} ]; then
        sudo cp ~/.vz_bashrc /vz/root/$1/tmp/tnash_bashrc;
        sudo vzctl enter $1 --exec 'source /tmp/tnash_bashrc';
        sudo rm /vz/root/$1/tmp/tnash_bashrc;
    fi
}

I have similar for LXC and K8s systems I manage.

My commands vary, but here are my main ones, in addition to take and a couple of others already mentioned.

# -----------------------------------------------------
# extract:  Extract most known archives with one command
# -----------------------------------------------------
extract () {
    if [ -f $1 ] ; then
        case $1 in
            *.tar.bz2)   tar xjf $1    ;;
            *.tar.gz)    tar xzf $1    ;;
            *.bz2)       bunzip2 $1    ;;
            *.rar)       unrar e $1    ;;
            *.gz)        gunzip $1     ;;
            *.tar)       tar xf $1     ;;
            *.tbz2)      tar xjf $1    ;;
            *.tgz)       tar xzf $1    ;;
            *.zip)       unzip $1      ;;
            *.Z)         uncompress $1 ;;
            *.7z)        7z x $1       ;;
            *)           echo "'$1' cannot be extracted via extract()" ;;
        esac
    else
        echo "'$1' is not a valid file"
    fi
}

This means I can just run extract file.tar.gz, or indeed any of a range of formats, and off it goes and extracts.

The next three are really a replacement for the pushd and popd commands.

# -------------------------------
# Mark directory for quick access
# -------------------------------
md() {
    # Accept up to one argument
    if [ "$#" -gt 1 ] ; then
        printf >&2 'md(): Too many arguments\n'
        return 2
    fi
    # If argument given, change to it in a subshell to get the absolute path.
    # If not, use the current working directory.
    if [ -n "$1" ] ; then
        set -- "$(cd -- "$1" && printf '%s/' "$PWD")"
        set -- "${1%%/}"
    else
        set -- "$PWD"
    fi
    # If that turned up empty, we have failed; the cd call probably threw an
    # error for us too
    [ -n "$1" ] || return
    # Save the specified path in the marked directory var
    # shellcheck disable=SC2034
    PMD=$1
}

md is used to mark a directory, for example: md httpdocs/wp-content/plugin/pluginwasworkingon/

# ----------------------
# Go to Marked Directory
# ----------------------
gd() {
    # Refuse to deal with unwanted arguments
    if [ "$#" -gt 0 ] ; then
        printf >&2 'gd(): Unexpected argument\n'
        return 2
    fi
    # Complain if mark not actually set yet
    if [ -z "$PMD" ] ; then
        printf >&2 'gd(): Mark not set\n'
        return 1
    fi
    # Go to the marked directory
    # shellcheck disable=SC2164
    cd -- "$PMD"
}

gd will then go to that directory, so if I navigate away, all I need to do is type gd and I will be back at /var/www/vhosts/httpdocs/wp-content/plugin/pluginwasworkingon/

# ----------------------
# Print Marked Directory
# ----------------------
pmd() {
    if [ -z "$PMD" ] ; then
        printf >&2 'pmd(): Mark not set\n'
        return 1
    fi
    printf '%s\n' "$PMD"
}

Just prints out what the current marked directory is. This is simple but works really well.

# ----------------------------
# Interact with Pipes via Nano
# Based on
# ----------------------------
vipe() {
    if [ ${1} ]; then
        case "${1}" in
            -h) echo "usage: vipe [-hV]"
                exit 0 ;;
            -V) echo "$VERSION"
                exit 0 ;;
            *)  echo "unknown option: \"${1}\""
                echo "usage: vipe [-hV]"
                exit 1 ;;
        esac
    fi
    # temp file
    t="/tmp/vipe.$$.txt"
    touch $t
    # read from stdin
    if [ ! -t 0 ]; then
        cat > $t
    fi
    # spawn editor with stdio connected
    ${EDITOR} $t < /dev/tty > /dev/tty || exit $?
    # write to stdout
    cat $t
    # cleanup
    rm $t
}

vipe is actually a command-line tool that's part of the moreutils package; however, this is a simplified shell-script version. It allows you to manipulate data as it's passing through a pipe ("vipe is vim in pipe"), but I use nano. I didn't rename it nipe, because that sounds weird.

So you would do something like:

cat example.log | vipe | wc -l

So you concatenate the file, it opens in the editor, and on save it performs a line count. More importantly, if I don't save, the pipe chain terminates. This is useful, as it means your editor can act as a breakpoint in your pipe.

The last few bits are in my misc folder

# ---------------------
# History Configuration
# ---------------------
# Ignore empty & duplicate content
export HISTCONTROL=ignoreboth
# Change date format
export HISTTIMEFORMAT="[%Y-%m-%d %T] "
# Ignore ls, ll, exit, q and history commands
export HISTIGNORE="&:ls:ll:lll:pwd:exit:q:history:grep"
alias hgrep='history | grep '

Pretty standard history configuration; HISTIGNORE is mainly for getting rid of background noise, such as looking at history. hgrep is one of those useful commands you don't want to admit to anyone you use as much as you do, which is why it's ignored in my history.

# ---------------
# Auto Completion
# ---------------
if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
    . /etc/bash_completion
fi
for file in ~/.bashrc.d/completion/*.bashrc; do
    source "$file"
done

This loads in auto-completion. I tab-tab all the time and am used to auto-completing; it saves so much time. I know this makes me a lazy operator, but I like to think it also makes me a smart one.

# -------------------
# iTerm2 Integration
# -------------------
if [ $ITERM_SESSION_ID ]; then
  export PROMPT_COMMAND='echo -ne "\033];${PWD##*/}\007"; ':"$PROMPT_COMMAND";
  source ~/.iterm2_shell_integration.bash
fi

The final bit is integration with iTerm2, to allow coprocesses and some basic information to be passed to the terminal.

All that's left is to add the PATH (normally set in .bash_profile rather than .bashrc, but included here for the sake of completeness):

# ---------------------
# Environment variables
# ---------------------
export TERM=xterm-256color
export EDITOR=nano
export VISUAL=nano
export PATH

And that's it: my bashrc and bash_profile.

Customising BashRC

My bashrc files and bashrc.d directory are really pretty minimal. I have been toying with making additional changes to simplify things further, one of which is to replace the bashrc.d folder and instead run a build script that compiles the files I need down into a single bashrc. I already have this to a certain extent with my in-container drop-in files. My thinking is that I would then just have a small API endpoint, and my curl call would bring in the aliases/commands it needs, something like:

/bashrc/?aliases=centos6,openvz&command=mark,vipe,systemd

which generates a single bashrc file.
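A minimal local sketch of that compile-down idea, using cat in place of the hypothetical API endpoint (all paths and file contents here are placeholders):

```shell
set -e
# Stand-in source fragments, mimicking the bashrc.d layout:
mkdir -p /tmp/bashrc-build/aliases /tmp/bashrc-build/commands
echo 'alias q=exit' > /tmp/bashrc-build/aliases/common.bashrc
echo 'take () { mkdir -pv "$1" && cd "$1"; }' > /tmp/bashrc-build/commands/take.bashrc
# "Compile" the selected fragments down into one flat bashrc; the real
# version would be generated and served by the API endpoint instead:
cat /tmp/bashrc-build/aliases/common.bashrc \
    /tmp/bashrc-build/commands/take.bashrc > /tmp/bashrc-build/bashrc
```

The server would just perform this concatenation per request, driven by the query-string parameters.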

Want to learn more?

This post is from a series called Productivity Fridays.

This post was written by me, Tim Nash. I write and talk about WordPress, security & performance. If you enjoyed it, please do share it!
