`zpool replace` keeps giving write errors with new drives & different SATA cables

Links to reddit and SU.

UPDATE It may be a firmware bug that the Seagate Exos X20 ST18000NM003D 18TB drive cannot use 512B logical sectors on a ThinkStation S30 running Ubuntu 22.04 (kernel 5.15.0-88-generic). I ended up destroying the pool and recreating it with 4K sectors, and the errors have not returned.

I replaced a failing hard drive (increasing error counts in SMART, read/write/checksum errors in zpool status) in a ZFS raidz1 pool with a new one by running zpool offline and zpool replace. Resilvering started shortly afterwards, but it keeps producing write errors (see below). When a resilver eventually finishes, the pool remains in the DEGRADED state. If I zpool clear the errors or reboot, another resilver starts automatically, again ending with a potentially different number of write errors.

SMART shows no errors for this new drive. I also tried the other new drive (I bought two replacements) and swapped SATA cables; it is always the replacement drive that accumulates write errors during resilvering. This made me suspect that the ZFS pool itself is somehow compromised, but it continues to run and completes a zfs send every night.

What’s the right way to troubleshoot and resolve this issue (e.g., can I zpool scrub a DEGRADED pool with just three of the four drives, since zpool replace cannot complete without errors)? I do have backups made by zfs send/receive, which are hopefully good copies (does ZFS checksum received streams?).

  pool: space
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Nov  3 09:56:59 2023
        2.92T scanned at 594M/s, 2.63T issued at 536M/s, 5.90T total
        669G resilvered, 44.59% done, 01:46:40 to go
config:

        NAME                                     STATE     READ WRITE CKSUM
        space                                    DEGRADED     0     0     0
          raidz1-0                               DEGRADED     0     0     0
            ata-ST3000DM001-1CH166_ZZZZ1111      ONLINE       0     0     0  block size: 512B configured, 4096B native
            ata-ST3000DM001-1CH166_ZZZZ2222      ONLINE       0     0     0  block size: 512B configured, 4096B native
            replacing-2                          UNAVAIL      0     0     0  insufficient replicas
              13284017409215481231               OFFLINE      0     0     0  was /dev/disk/by-id/ata-ST3000DM001-1CH166_Z1F12QMZ-part1
              ata-ST18000NM003D-3DL103_YYYY1111  FAULTED      0 4.38K     0  too many errors  (resilvering)
            ata-ST3000DM001-1CH166_ZZZZ4444      ONLINE       0     0     0  block size: 512B configured, 4096B native

EDIT: I’m seeing these errors in dmesg:

[19561.708059] sd 2:0:1:0: [sdb] tag#226 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_OK cmd_age=31s
[19561.708063] sd 2:0:1:0: [sdb] tag#226 Sense Key : Illegal Request [current]
[19561.708067] sd 2:0:1:0: [sdb] tag#226 Add. Sense: Unaligned write command
[19561.708070] sd 2:0:1:0: [sdb] tag#226 CDB: Write(16) 8a 00 00 00 00 00 68 f4 ff 40 00 00 00 53 00 00
[19561.708073] blk_update_request: I/O error, dev sdb, sector 1760886592 op 0x1:(WRITE) flags 0x700 phys_seg 83 prio class 0

EDIT: No errors from smartctl:

smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.15.0-88-generic] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     ST18000NM003D-3DL103
Serial Number:    YYYY1111
LU WWN Device Id: 5 000c50 0e717e3c1
Firmware Version: SN03
User Capacity:    18,000,207,937,536 bytes [18.0 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-4 (minor revision not indicated)
SATA Version is:  SATA 3.3, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Sat Nov  4 10:04:47 2023 EDT
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity
                                        was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever
                                        been run.
Total time to complete Offline
data collection:                (  575) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        (1759) minutes.
Conveyance self-test routine
recommended polling time:        (   2) minutes.
SCT capabilities:              (0x70bd) SCT Status supported.
                                        SCT Error Recovery Control supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   083   064   044    Pre-fail  Always       -       179393245
  3 Spin_Up_Time            0x0003   094   094   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       6
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   075   061   045    Pre-fail  Always       -       33945298
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       71
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       6
 18 Unknown_Attribute       0x000b   100   100   050    Pre-fail  Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   098   000    Old_age   Always       -       30065229831
190 Airflow_Temperature_Cel 0x0022   062   047   000    Old_age   Always       -       38 (Min/Max 36/38)
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       3
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       6
194 Temperature_Celsius     0x0022   038   053   000    Old_age   Always       -       38 (0 23 0 0 0)
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0023   100   100   001    Pre-fail  Always       -       0
240 Head_Flying_Hours       0x0000   100   100   000    Old_age   Offline      -       70 (122 77 0)
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       6940834183
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       35234995302

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%        63         -
# 2  Short offline       Completed without error       00%        40         -
# 3  Short offline       Completed without error       00%         0         -
# 4  Short offline       Interrupted (host reset)      00%         0         -
# 5  Short offline       Completed without error       00%         0         -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

GitHub Actions ignores/overrides Docker container’s entrypoint

Problem: I’m trying to port a GitLab pipeline to GitHub Actions; we use Docker containers to provide the runtime environment. In GitLab, we simply use a line image: $DOCKER_TAG. We build the images ourselves, and they use a script as the entry point (ENTRYPOINT ["/run.sh"]). The script sets up the environment (e.g., by sourcing the setvars.sh script for the Intel compilers, calling ulimit -s unlimited, etc.) and calls exec "$@" at the end. For GitHub, I am using

container:
  image: ${{ matrix.DOCKER_TAG }}

However, the commands run later cannot find the needed binaries. Looking at the log, it appears that the container was created with --entrypoint "tail", causing the run.sh script to be ignored. I tried adding options: --entrypoint '/run.sh' in the workflow YAML file, but it was not reflected in how the container was created, and the command still failed.

I may be missing something obvious, though I checked both the documentation and Google. Is there any way to use the entrypoint provided by the image without creating a Docker container action?

UPDATE Two more things I tried:

1) Specifying the /run.sh script as Custom shell: shell: '/run.sh {0}', but got an error

Error: Second path fragment must not be a drive or UNC name. (Parameter 'expression')

2) Using a Docker container action, or specifying a Docker image to use for a job step. In both cases the Docker image has to be hard-coded (or built fresh every time). Trying to use input arguments like

# Docker container action
image: docker://${{ inputs.docker_tag }}

or

# Job step
- uses: docker://${{ matrix.DOCKER_TAG }}
  with:
    args: ./.github/actions/build/build.sh

both fail with the error

Unrecognized named-value: 'inputs'. Located at position 1 within expression: inputs.docker_tag

Solution: I’ve settled on the approach below. It is not ideal/DRY, as the run.sh entrypoint script has to be duplicated from the Docker container and kept up to date. Also, the upload-artifact action does not preserve executable bits, so everything has to be packed into a tar file first.

jobs:
  build:
    container:
      image: XX/compiler:${{ matrix.DOCKER_TAG }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: ./.github/scripts/run.sh ./.github/scripts/build.sh
      - uses: actions/upload-artifact@v2
        with:
          name: build-artifact
          path: 'build-*.tar.bz2'
          retention-days: 7
    strategy:
      fail-fast: false
      matrix:
        DOCKER_TAG: [gcc, nvhpc, intel]
        include:
          - DOCKER_TAG: gcc
            FC: gfortran
          - DOCKER_TAG: nvhpc
            FC: nvfortran
          - DOCKER_TAG: intel
            FC: ifort

Setting up Windows

Installing WSL

  • Install Windows Subsystem for Linux and the VcXsrv X11 server.
  • Install the following programs
    • gnuplot - a plotting program
    • ImageMagick - a software suite to create, edit, compose, or convert bitmap images

Open a WSL terminal and run:

sudo apt update
sudo apt install gnuplot imagemagick

Installing common GUI tools

Setting up ssh-agent

  • Generate a key pair following the first bullet point under SSH-Agent, then skip to the last bullet point, and install weasel-pageant.
  • Open a WSL terminal and follow “On Ubuntu or Debian” in GPG-Agent.
  • Install the Windows version of GPG-agent following “On Windows”.
  • Add the lines under “Useful common settings”.

Setting up .ssh/config

Follow the steps in Customizing .ssh/config and set up Sharing sessions over a single connection and Host alias. Leave Multi-hop for later when you need it, but do keep this option in mind.

Python email package: how to reliably convert/decode multipart messages to str

Problem I was trying to process old, potentially non-compliant emails with Python. I could read in the message without problem:

In [1]: m=email.message_from_binary_file(open('/path/to/problematic:2,S',mode='rb'))

But subsequently converting it to a string gave UnicodeEncodeError: 'gb2312' codec can't encode character '\ufffd' in position 1238: illegal multibyte sequence. The offending part of this multipart message has Content-Type: text/plain; charset="gb2312" and Content-Transfer-Encoding: 8bit.

In [2]: m.as_string()
---------------------------------------------------------------------------
UnicodeEncodeError                        Traceback (most recent call last)
<ipython-input-26-919a3a20e7d8> in <module>()
----> 1 m.as_string()

~/tools/conda/envs/conda3.6/lib/python3.6/email/message.py in as_string(self, unixfrom, maxheaderlen, policy)
    156                       maxheaderlen=maxheaderlen,
    157                       policy=policy)
--> 158         g.flatten(self, unixfrom=unixfrom)
    159         return fp.getvalue()
    160

~/tools/conda/envs/conda3.6/lib/python3.6/email/generator.py in flatten(self, msg, unixfrom, linesep)
    114                     ufrom = 'From nobody ' + time.ctime(time.time())
    115                 self.write(ufrom + self._NL)
--> 116             self._write(msg)
    117         finally:
    118             self.policy = old_gen_policy

~/tools/conda/envs/conda3.6/lib/python3.6/email/generator.py in _write(self, msg)
    179             self._munge_cte = None
    180             self._fp = sfp = self._new_buffer()
--> 181             self._dispatch(msg)
    182         finally:
    183             self._fp = oldfp

~/tools/conda/envs/conda3.6/lib/python3.6/email/generator.py in _dispatch(self, msg)
    212             if meth is None:
    213                 meth = self._writeBody
--> 214         meth(msg)
    215
    216     #

~/tools/conda/envs/conda3.6/lib/python3.6/email/generator.py in _handle_multipart(self, msg)
    270             s = self._new_buffer()
    271             g = self.clone(s)
--> 272             g.flatten(part, unixfrom=False, linesep=self._NL)
    273             msgtexts.append(s.getvalue())
    274         # BAW: What about boundaries that are wrapped in double-quotes?

~/tools/conda/envs/conda3.6/lib/python3.6/email/generator.py in flatten(self, msg, unixfrom, linesep)
    114                     ufrom = 'From nobody ' + time.ctime(time.time())
    115                 self.write(ufrom + self._NL)
--> 116             self._write(msg)
    117         finally:
    118             self.policy = old_gen_policy

~/tools/conda/envs/conda3.6/lib/python3.6/email/generator.py in _write(self, msg)
    179             self._munge_cte = None
    180             self._fp = sfp = self._new_buffer()
--> 181             self._dispatch(msg)
    182         finally:
    183             self._fp = oldfp

~/tools/conda/envs/conda3.6/lib/python3.6/email/generator.py in _dispatch(self, msg)
    212             if meth is None:
    213                 meth = self._writeBody
--> 214         meth(msg)
    215
    216     #

~/tools/conda/envs/conda3.6/lib/python3.6/email/generator.py in _handle_text(self, msg)
    241                 msg = deepcopy(msg)
    242                 del msg['content-transfer-encoding']
--> 243                 msg.set_payload(payload, charset)
    244                 payload = msg.get_payload()
    245                 self._munge_cte = (msg['content-transfer-encoding'],

~/tools/conda/envs/conda3.6/lib/python3.6/email/message.py in set_payload(self, payload, charset)
    313             if not isinstance(charset, Charset):
    314                 charset = Charset(charset)
--> 315             payload = payload.encode(charset.output_charset)
    316         if hasattr(payload, 'decode'):
    317             self._payload = payload.decode('ascii', 'surrogateescape')

UnicodeEncodeError: 'gb2312' codec can't encode character '\ufffd' in position 1238: illegal multibyte sequence

I’m not really familiar with the idiosyncrasies of email internals, and searching online for this type of error mostly turned up web-scraping problems, with answers stating the more or less obvious: the data read in contains characters that cannot be encoded with the target codec.

My question is: what’s the correct way to reliably handle (potentially non-compliant) emails?

EDIT It is interesting that m.get_payload(i=0).as_string() triggers the same exception, but m.get_payload(i=0).get_payload(decode=False) gives a str that displays correctly on my terminal, while m.get_payload(i=0).get_payload(decode=True) gives bytes (b'\xd7\xaa...') that I cannot decode. The error now happens on a different character:

----> 1 m.get_payload(i=0).get_payload(decode=True).decode('gb2312')
UnicodeDecodeError: 'gb2312' codec can't decode byte 0xac in position 1995: illegal multibyte sequence

or

----> 1 m.get_payload(i=0).get_payload(decode=True).decode('gb18030')
UnicodeDecodeError: 'gb18030' codec can't decode byte 0xa3 in position 2033: illegal multibyte sequence
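Since some bytes in these legacy messages are invalid in any GB codec, a pragmatic fallback (my assumption, not from the original post) is to decode with errors='replace', which preserves the rest of the text:

```python
# Hypothetical sample: a valid GB18030 two-byte sequence plus a stray 0xff,
# which is not a legal lead byte in GB18030
raw = b'\xd7\xaa\xff'
text = raw.decode('gb18030', errors='replace')
# The valid sequence decodes to one character; the bad byte becomes U+FFFD
print(text.endswith('\ufffd'), len(text))  # True 2
```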

Answer Apparently, if Content-Transfer-Encoding is 8bit, message.get_payload(decode=False) will still try to decode it to recover the original bytes. On the other hand, message.get_payload(decode=True) always produces bytes, although actual decoding happens only if Content-Transfer-Encoding exists and is quoted-printable or base64.
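A minimal illustration of that asymmetry, using a constructed base64 message (not the 8bit case from the post):

```python
import email

# A tiny, well-formed message (hypothetical sample)
raw = (b"Content-Type: text/plain; charset=utf-8\r\n"
       b"Content-Transfer-Encoding: base64\r\n"
       b"\r\n"
       b"aGVsbG8=\r\n")
m = email.message_from_bytes(raw)

# decode=True undoes the base64 transfer encoding and yields bytes
print(m.get_payload(decode=True))           # b'hello'
# decode=False returns the transfer-encoded payload as a str
print(m.get_payload(decode=False).strip())  # aGVsbG8=
```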

I ended up with the following code. Not sure if this is the correct way of handling emails.

body = []
if m.preamble is not None:
    body.extend(m.preamble.splitlines(keepends=True))

for part in m.walk():
    if part.is_multipart():
        continue

    ctype = part.get_content_type()
    # get_params parses the header value into a list of (value, '') pairs
    cte = part.get_params(header='Content-Transfer-Encoding')
    if (ctype is not None and not ctype.startswith('text')) or \
       (cte is not None and cte[0][0].lower() == '8bit'):
        # non-text and 8bit parts are taken as-is (already a str)
        part_body = part.get_payload(decode=False)
    else:
        charset = part.get_content_charset()
        if charset is None or len(charset) == 0:
            charsets = ['ascii', 'utf-8']
        else:
            charsets = [charset]

        part_body = part.get_payload(decode=True)
        for enc in charsets:
            try:
                part_body = part_body.decode(enc)
                break
            except UnicodeDecodeError as ex:
                continue
            except LookupError as ex:
                continue
        else:
            # every codec failed; fall back to the undecoded str payload
            part_body = part.get_payload(decode=False)

    body.extend(part_body.splitlines(keepends=True))

if m.epilogue is not None:
    body.extend(m.epilogue.splitlines(keepends=True))

Array ordering when wrapping Fortran for Python using SWIG and numpy.i

Unresolved SO Question: I have a Fortran subroutine similar to the following:

subroutine fsub(array, dim1, dim2) bind(c)
  use iso_c_binding, only: c_int, c_double
  integer(c_int), intent(in), value:: dim1, dim2
  real(c_double), intent(inout):: array(dim1, dim2)

  array(1, 1) = 1
  array(2, 1) = 2
  array(1, 2) = 100
end subroutine

If I wrap it using SWIG and numpy.i and the following typemaps:

%apply (double* INPLACE_FARRAY2, int DIM1, int DIM2) {(double* array, int dim1, int dim2)}

%inline %{
void fsub(double* array, int dim1, int dim2);
%}

Then I would have to allocate a ‘C’ order array to pass in:

In [1]: import numpy as np; \
        import fmod; \
        arrayF = np.empty((100, 100), dtype=np.float_, order='F'); \
        arrayC = np.empty((100, 100), dtype=np.float_)
In [2]: fmod.fsub(arrayF)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-2-fa80220ec8a8> in <module>()
----> 1 fmod.fsub(arrayF)

TypeError: Array must be contiguous.  A non-contiguous array was given

In [3]: fmod.fsub(arrayC)
In [4]: arrayC[0, 0]
Out[4]: 1.0
In [5]: arrayC[1, 0]
Out[5]: 2.0
In [6]: arrayC[0, 1]
Out[6]: 100.0

My questions are:

  1. Shouldn’t the Fortran statement array(2, 1) = 2 set arrayC[0, 1] instead?

If I had INPLACE_ARRAY2 instead of INPLACE_FARRAY2 in the %apply directive, then indeed I would have arrayC[0,1] = 2.0 after the call.

What happens exactly?

  2. Why wasn’t arrayF allowed? If I use f2py, then arrayF must be used and arrayC isn’t allowed, which is intuitive.
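For question 1, here is a small NumPy sketch (my own illustration) of which C-order elements the three column-major memory offsets touched by fsub correspond to:

```python
import numpy as np

# fsub writes memory offsets 0, 1 and dim1 (column-major):
# array(1,1) -> offset 0, array(2,1) -> offset 1, array(1,2) -> offset dim1
a = np.zeros((100, 100))     # C-order buffer, like arrayC
flat = a.reshape(-1)         # flat view of the row-major memory
flat[0], flat[1], flat[100] = 1, 2, 100
print(a[0, 1], a[1, 0])      # 2.0 100.0
```

Under a column-major reading of arrayC’s buffer, array(2, 1) = 2 would indeed land on arrayC[0, 1], which is why the observed arrayC[1, 0] == 2.0 is puzzling.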

Python 2 pdb: a statement behaves differently when run at the pdb prompt

Problem This question may turn out to be really stupid, but here it is. The following statement triggers an exception on a particular email message:

  File "/Users/me/tools/maildir-deduplicate/maildir_deduplicate/mail.py", line 104, in body_lines
_, _, body = self.message.as_string().partition("\n\n")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd7 in position 621: ordinal not in range(128)

If I run under pdb and manually enter the statement at the prompt, no exception is thrown and body is correctly set.

> /Users/me/tools/maildir-deduplicate/maildir_deduplicate/mail.py(105)body_lines()
-> _, _, body = self.message.as_string().partition("\n\n")
(Pdb) _, _, body = self.message.as_string().partition("\n\n")

But if I then hit n(ext), it still throws the exception:

(Pdb) n
UnicodeDecodeError: UnicodeD...ge(128)')
> /Users/me/tools/maildir-deduplicate/maildir_deduplicate/mail.py(105)body_lines()
-> _, _, body = self.message.as_string().partition("\n\n")

If I break the statement in two, the exception is thrown on the partition() call.

  File "/Users/me/tools/maildir-deduplicate/maildir_deduplicate/mail.py", line 106, in body_lines
body = self.message.as_string()
_, _, body = body.partition("\n\n")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd7 in position 621: ordinal not in range(128)

Same story under pdb: the exception is thrown if I hit n, but not if I enter _, _, body = body.partition("\n\n") at the prompt.

Any ideas what might be causing this?

Solution by Mark Tolonen: I suspect you have a from __future__ import unicode_literals in your code:

Test code:

from __future__ import unicode_literals
body = b'abc\n\ndef\xd7ghi'
_,_,body = body.partition('\n\n')

When run directly (no pdb):

Traceback (most recent call last):
  File "C:\test.py", line 4, in <module>
    _,_,body = body.partition('\n\n')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xd7 in position 8: ordinal not in range(128)

When stepped through in pdb, it gets the same UnicodeDecodeError:

> c:\test.py(2)<module>()
-> from __future__ import unicode_literals
(Pdb) n
> c:\test.py(3)<module>()
-> body = b'abc\n\ndef\xd7ghi'
(Pdb) n
> c:\test.py(4)<module>()
-> _,_,body = body.partition('\n\n')
(Pdb) n
UnicodeDecodeError: UnicodeD...ge(128)')      <<<<<<<<<<<<<<<<
> c:\test.py(4)<module>()
-> _,_,body = body.partition('\n\n')

When the line is executed manually at the prompt, it works because pdb itself isn’t under the __future__ import, so '\n\n' is a byte string:

> c:\test.py(2)<module>()
-> from __future__ import unicode_literals
(Pdb) n
> c:\test.py(3)<module>()
-> body = b'abc\n\ndef\xd7ghi'
(Pdb) n
> c:\test.py(4)<module>()
-> _,_,body = body.partition('\n\n')
(Pdb) _,_,body = body.partition('\n\n')   <<<<<<<<<<<<< manual
(Pdb) body                                <<<<<<<<<<<<< worked!
'def\xd7ghi'
(Pdb) n
UnicodeDecodeError: UnicodeD...ge(128)')  <<<<<<<<<<<<< failed!
> c:\test.py(4)<module>()
-> _,_,body = body.partition('\n\n')
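As a side note (my addition, not part of Mark’s answer): Python 3 removes the implicit ASCII decode entirely, so the same mix of bytes and str fails with a TypeError no matter where it is entered:

```python
body = b'abc\n\ndef\xd7ghi'
try:
    body.partition('\n\n')         # str separator on a bytes object
except TypeError:
    print('bytes/str mix raises TypeError')

# With a bytes separator it works without any decoding:
print(body.partition(b'\n\n')[2])  # b'def\xd7ghi'
```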

Windows Subsystem for Linux

Windows Subsystem for Linux (WSL) is a compatibility layer for running Linux binary executables (in ELF format) natively on Windows 10. This means you can download an executable compiled for Linux and run it unmodified under Windows. You can use apt install to access the full repertoire of Ubuntu packages, or install a different distribution such as Fedora or openSUSE via the Windows Store.

X11 and DBus

Since WSL does not currently support X11 or Unix domain sockets (which DBus uses by default), we need to do the following:

  • Install VcXsrv or Xming.
  • export DISPLAY=:0.0. You can add this to your shell init script:
echo "export DISPLAY=:0.0" >> ~/.bashrc
  • In /etc/dbus-1/session.conf, replace unix:tmpdir=/tmp with tcp:host=localhost,port=0. If the file or the line does not exist, simply add it.
  • Suppress a few other benign warnings:
echo "export NO_AT_BRIDGE=1" >> ~/.bashrc
sudo dbus-uuidgen > /etc/machine-id

Filter forwarded emails in Gmail

Answer to this SE question You can use the deliveredto: operator in the Has the words field when creating a filter, as indicated by mvime. However, not all email providers append Delivered-to to the email header, so a more reliable way is to forward you@oldemail.com to you+oldemail@gmail.com, and filter by deliveredto:(you+oldemail@gmail.com).

Emacs: Jump to the next line with same indentation

A little improvement on Stefan’s answer:

(defun jump-to-same-indent (direction)
  (interactive "P")
  ;; Move by DIRECTION lines (default 1) until a line with the same
  ;; indentation is found, skipping zero-indentation (e.g., empty) lines.
  (let ((start-indent (current-indentation)))
    (while (and (not (bobp))
                (zerop (forward-line (or direction 1)))
                (or (= (current-indentation) 0)
                    (> (current-indentation) start-indent)))))
  (back-to-indentation))

This function takes a prefix argument (e.g., +1/-1) that designates the number of lines to move over when searching for a line with the same indentation. It also skips empty (zero-indentation) lines. Finally, one can bind the forward and backward searches to keybindings, similar to M-{ and M-} for paragraphs:

(global-set-key [?\C-{] #'(lambda () (interactive) (jump-to-same-indent -1)))
(global-set-key [?\C-}] 'jump-to-same-indent)

Shortcut to select 10+ windows in GNU Screen

Links to SU post 1, post 2, and SO.

According to screen’s manual, you can add the following lines to your ~/.screenrc file:

bind -c demo1 0 select 10
bind -c demo1 1 select 11
bind -c demo1 2 select 12
bindkey "^B" command -c demo1

This makes C-b 0 select window 10, C-b 1 window 11, etc. Alternatively, you can use:

bind -c demo2 0 select 10
bind -c demo2 1 select 11
bind -c demo2 2 select 12
bind - command -c demo2

This makes C-a - 0 select window 10, C-a - 1 window 11, etc.