
Max stack usage for C program

Wednesday, 10 April 2019
|
Written by
Grégory Soutadé

Another day, another script. This one helps to compute the maximum stack usage of a C program. In fact, it combines the output of cflow and GCC to find the heaviest path used (which is not necessarily the deepest one). The first one computes the target software's call graph, while GCC's -fstack-usage option creates .su files containing the stack usage of every function.
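
For reference, here is roughly what the two inputs look like (the column number and the exact cflow signature below are illustrative); the script joins them on the file:line pair that appears in both.

# One line of a .su file generated by -fstack-usage
# (tab-separated: file:line:column:function, stack size in bytes, qualifier)
gget.c:493:5:main	352	static

# Matching line of the cflow output (indentation encodes the call depth)
main() <int main (void) at gget.c:493>: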

The target software is simple embedded software. This script is a simple base, not intended to cover every case: it handles neither dynamic stack allocation nor recursive functions (if you wish to add them...).

A file version is available here.

#!/usr/bin/env python

import os
import re
import argparse

class SUInfo:
    def __init__(self, filename, line, func_name, stack_size):
        self.filename = filename
        self.line = line
        self.func_name = func_name
        self.stack_size = stack_size

    def __str__(self):
        s = '%s() <%s:%s> %d' % (self.func_name, self.filename, self.line, self.stack_size)
        return s

class FlowElement:
    def __init__(self, root, depth, stack_size, suinfo):
        self.root = root
        self.depth = depth
        self.stack_size = stack_size
        self.suinfo = suinfo
        self.childs = []

    def append(self, suinfo):
        self.childs.append(suinfo)

    def __str__(self):
        spaces = '    ' * self.depth
        su = self.suinfo
        res = '%s-> %s() %d <%s:%d>' % (spaces, su.func_name, su.stack_size,
                                        su.filename, su.line)
        return res

def display_max_path(element):
    print('Max stack size %d' % (element.stack_size))
    print('Max path :')
    res = ''
    while element:
        res = str(element) + '\n' + res
        element = element.root
    print(res)

cflow_re = re.compile(r'([ ]*).*\(\) \<.* at (.*)\>[:]?')

def parse_cflow_file(path, su_dict):
    root = None
    cur_root = None
    current = None
    cur_depth = 0
    max_stack_size = 0
    max_path = None
    with open(path) as f:
        while True:
            line = f.readline()
            if not line: break
            match = cflow_re.match(line)
            if not match: continue

            spaces = match.group(1)
            # Convert tab into 4 spaces
            spaces = spaces.replace('\t', '    ')
            depth = len(spaces)//4
            filename = match.group(2)
            (filename, line) = filename.split(':')
            filename = '%s:%s' % (os.path.basename(filename), line)

            suinfo = su_dict.get(filename, None)
            # Some functions may have been inlined
            if not suinfo:
                # print('WARNING: Key %s not found in su dict"' % (filename))
                continue

            if not root:
                root = FlowElement(None, 0, suinfo.stack_size, suinfo)
                cur_root = root
                current = root
                max_path = root
                max_stack_size = suinfo.stack_size
            else:
                # Go back
                if depth < cur_depth:
                    while cur_root.depth > (depth-1):
                        cur_root = cur_root.root
                # Go depth
                elif depth > cur_depth:
                    cur_root = current
                cur_depth = depth
                stack_size = cur_root.stack_size + suinfo.stack_size
                element = FlowElement(cur_root, cur_depth,
                                      stack_size,
                                      suinfo)
                current = element
                if stack_size > max_stack_size:
                    max_stack_size = stack_size
                    max_path = current
                cur_root.append(element)
    display_max_path(max_path)

su_re = re.compile(r'(.*)\t([0-9]+)\t(.*)')

def parse_su_files(path, su_dict):
    for root, dirs, files in os.walk(path):
        for sufile in files:
            if not sufile.endswith('.su'): continue
            with open(os.path.join(root, sufile)) as f:
                while True:
                    line = f.readline()
                    if not line: break
                    match = su_re.match(line)
                    if not match:
                        # print('WARNING no match for "%s"' % (line))
                        continue
                    infos = match.group(1)
                    # First field is "file:line:column:function"
                    (filename, line, col, function) = infos.split(':')
                    stack_size = int(match.group(2))
                    key = '%s:%s' % (filename, line)
                    su_info = SUInfo(filename, int(line), function, stack_size)
                    su_dict[key] = su_info


if __name__ == '__main__':
    optparser = argparse.ArgumentParser(description='Max static stack size computer')
    optparser.add_argument('-f', '--cflow-file', dest='cflow_file',
                           help='cflow generated file')
    optparser.add_argument('-d', '--su-dir', dest='su_dir',
                           default='.',
                           help='Directory where GCC .su files are generated')
    options = optparser.parse_args()

    su_dict = {}

    parse_su_files(options.su_dir, su_dict)
    parse_cflow_file(options.cflow_file, su_dict)

Usage & example

Let's take this simple software as example.

First, compile your software with the -fstack-usage option in CFLAGS. It will create a .su file for each object file. Then, launch cflow on your software. Finally, call my script.

mkdir test
cd test
gcc -fstack-usage gget.c -lpthread -lcurl
cflow gget.c > cflow.res
./cflow.py -f cflow.res

Result:

Max stack size 608
Max path :
-> main() 352 <gget.c:493>
    -> do_transfert() 160 <gget.c:228>
        -> progress_cb() 96 <gget.c:214>

Let's encrypt certificate renewal with Gandi LiveDNS API

Tuesday, 02 April 2019
|
Written by
Grégory Soutadé

I have been using Let's Encrypt TLS wildcard certificates for one year now. Until now, all was fine, but since the beginning of 2019 there are two domains on my certificate: soutade.fr and *.soutade.fr, and (maybe due to my certificate generation) I need to perform two challenges for renewal: HTTP (http01) and DNS (dns01).

So, I wrote a Python script that performs both :

#!/usr/bin/env python3
#-*- encoding: utf-8 -*-

# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program.  If not, see <http://www.gnu.org/licenses/>.

#
# Handle certificate renewal using HTTP and DNS challenges
# DNS challenge performed by Gandi Live v5 API
#

import requests
import os
import argparse
import shutil

# Config
API_KEY = "YOUR-KEY"
LIVEDNS_API = "https://dns.api.gandi.net/api/v5/"
ACME_RECORD = '_acme-challenge'
ACME_CHALLENGE_PATH = '/var/www/.well-known/acme-challenge'

headers = {
    'X-Api-Key': API_KEY,
}

CERTBOT_TOKEN = os.environ.get('CERTBOT_TOKEN', None)
CERTBOT_VALIDATION = os.environ.get('CERTBOT_VALIDATION', None)
DOMAIN = os.environ.get('CERTBOT_DOMAIN', None)

optparser = argparse.ArgumentParser(description='Letsencrypt challenge for Gandi v5 API')
optparser.add_argument('-c', '--cleanup', dest='cleanup',
                       action="store_true", default=False,
                       help='Cleanup challenge')

options = optparser.parse_args()     

if options.cleanup:
    print('Cleanup')
    if os.path.exists(ACME_CHALLENGE_PATH):
        shutil.rmtree(ACME_CHALLENGE_PATH)
else:
    if CERTBOT_TOKEN and CERTBOT_VALIDATION:
        print('Build HTTP authentication')
        # Create token file for web server
        if not os.path.exists(ACME_CHALLENGE_PATH):
            os.makedirs(ACME_CHALLENGE_PATH)
        token_path = os.path.join(ACME_CHALLENGE_PATH, CERTBOT_TOKEN)

        with open(token_path, 'w') as token:
            token.write(CERTBOT_VALIDATION)
        exit(0)

response = requests.get(LIVEDNS_API + "zones", headers=headers)

target_zone = None
if (response.ok):
    zones = response.json()
    for zone in zones:
        if zone['name'] == DOMAIN:
            target_zone = zone
            break
else:
    response.raise_for_status()
    exit(1)

if not target_zone:
    print('No zone found for domain %s' % (DOMAIN))
    exit(1)

domain_records_href = target_zone['zone_records_href']

# Get TXT record
response = requests.get(domain_records_href + "/" + ACME_RECORD, headers=headers)

# Delete record if it exists
if (response.ok):
    requests.delete(domain_records_href + "/" + ACME_RECORD, headers=headers)

if options.cleanup:
    exit(0)

print('Build DNS authentication')
record = {
    "rrset_name": ACME_RECORD,
    "rrset_type": "TXT",
    "rrset_ttl": 300,
    "rrset_values": [CERTBOT_VALIDATION],
    }

response = requests.post(domain_records_href,
                         headers=headers, json=record)

if (response.ok):
    print("DNS token created")
else:
    print("Something went wrong")
    response.raise_for_status()
    exit(1)

A downloadable version is available here.
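
If you want to try the hook by hand before wiring it into certbot, you can export the variables certbot normally provides and check the result yourself. A rough sketch, assuming the script is saved as /root/gandi_letsencrypt.py and using a fake validation value:

# Simulate the environment certbot passes to --manual-auth-hook
export CERTBOT_DOMAIN="soutade.fr"
export CERTBOT_VALIDATION="fake-validation-value"
/root/gandi_letsencrypt.py

# The TXT record should become visible once the zone is refreshed
dig +short TXT _acme-challenge.soutade.fr

# Remove the record again
/root/gandi_letsencrypt.py --cleanup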

Crontab

In /etc/crontab :

0  1   1 * *   root   certbot renew  --manual -n --manual-public-ip-logging-ok --manual-auth-hook /root/gandi_letsencrypt.py --manual-cleanup-hook /root/letsencrypt_token_cleanup.sh
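
Before relying on the cron job, the whole chain can also be tested against the Let's Encrypt staging environment with certbot's --dry-run option, reusing the same hooks:

certbot renew --dry-run --manual -n --manual-public-ip-logging-ok --manual-auth-hook /root/gandi_letsencrypt.py --manual-cleanup-hook /root/letsencrypt_token_cleanup.sh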

Additional scripts

Where /root/letsencrypt_token_cleanup.sh is

#!/bin/bash

/root/gandi_letsencrypt.py --cleanup

And in /etc/letsencrypt/renewal-hooks/post/ :

#!/bin/bash

service nginx restart

Errors

If you get a 404 error with nginx, you may add this block to ensure the request will not be handled elsewhere (or proxied to another web server):

        location /.well-known/acme-challenge/ {
        }
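
You can then check that nginx serves the challenge directory correctly (assuming, as in the script above, that the server root is /var/www):

mkdir -p /var/www/.well-known/acme-challenge
echo test > /var/www/.well-known/acme-challenge/test
curl -i http://soutade.fr/.well-known/acme-challenge/test
rm /var/www/.well-known/acme-challenge/test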

Git: keep your commits on top of a branch

Thursday, 14 February 2019
|
Written by
Grégory Soutadé

Today we'll play a bit with Git. At work, we make some products that use a customized Linux kernel. Once deployed, this kernel is not often updated, so we chose to base our work on LTS (Long Term Support) kernels. This gives us stability and not too many rebases to do. Unfortunately, the kernel gets security patches that we must include in our development.

But, to keep a clear history, we want to have all our commits on top of the vanilla branch. Plus, having this scheme helps to extract all custom patches for Yocto or another build system.

In our case, history needs to be rewritten in a non-trivial way.

We currently work with version v4.14.59, but by now kernel.org has published revision v4.14.98. Let's say that we have made the following commits

6f0b0d94b3e2250551fac6ba58b5ec7a02714174 --> 0790c6bd39a86b3964d022746fc85ae2eefb824d

after tag v4.14.59. So, we have something like this :

Current git state

In our remotes we have :

  • upstream --> points to kernel.org
  • origin --> internal copy of kernel.org

Our branches are :

  • linux-4.14.y -> upstream/linux-4.14.y (LTS branch)
  • linux-4.14.y-custom -> origin/linux-4.14.y-custom

First, we need to update the LTS branch.

    git checkout linux-4.14.y
    git pull upstream linux-4.14.y
    git fetch --tags upstream

The trick here is to put the HEAD of our custom branch back at the last tag without deleting our commits. So, we first need to make a copy of this branch.

    git checkout linux-4.14.y-custom
    git checkout -b linux-4.14.59-custom linux-4.14.y-custom

Then, cut the HEAD and integrate the vanilla work.

    git reset --hard v4.14.59
    git rebase linux-4.14.y

Finally, integrate back our commits.

    git cherry-pick 6f0b0d94b3e2250551fac6ba58b5ec7a02714174^..0790c6bd39a86b3964d022746fc85ae2eefb824d

The work is almost finished; we still need to update the internal tags we made! Unlike Subversion, a tag in Git is just a reference to a specific commit, so it's easy to manage and update. Even if it's a shared repository, we can change them, because the people who use them are focused on our custom commits and not on the ones in the vanilla branch. Here is a script that gets all custom tag references and applies them to the cherry-picked commits. Another strategy could be to postfix tags with the new kernel revision. It's up to you to decide what best fits your needs.

The script assumes all our custom tags start with the "custom" prefix.

#!/bin/bash

TAGS_PREFIX="custom"
OLD_START="v4.14.59"
OLD_END="0790c6bd39a86b3964d022746fc85ae2eefb824d"
NEW_START="v4.14.98"

nb_commits=`git log --pretty=oneline $OLD_START..$OLD_END|wc -l`

for tag in `git tag -l "${TAGS_PREFIX}*"`; do
    cur_commits=`git log --pretty=oneline $OLD_START..$tag|wc -l`
    new_commit=`git log --pretty="format:%H" -n1 --skip=$(($nb_commits - $cur_commits)) $NEW_START..HEAD`
    # git log --pretty=oneline -n1 $new_commit
    git tag -d $tag
    git tag $tag $new_commit
done

The last thing to do is to sync with the remote. We need to pull from origin because HEAD had a strange behavior:

    git pull origin linux-4.14.y-custom
    git push origin linux-4.14.y-custom
    git push origin linux-4.14.59-custom # Optional
    git push --force --tags origin

Forcing the tag push is not needed if the tags were just created and not modified. Now, we can delete our copy branch or keep it in Git. Don't delete it if you want to keep your old tag references!
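
Before the forced push, it can be worth checking that each custom tag now points to one of the cherry-picked commits (this simply reuses the "custom" prefix from the script above):

    for tag in `git tag -l "custom*"`; do
        echo -n "$tag -> "
        git log --pretty=oneline -n1 $tag
    done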

Final result:


Debian : Failed to find logical volume at boot

Thursday, 17 January 2019
|
Written by
Grégory Soutadé

After further investigation, I found a correct fix for this. In fact, it's my configuration that was wrong. I have a LUKS on LVM schema like this:

system_group (LVM)
    --> system_crypt (LUKS)
swap_group (LVM)
    --> swap_crypt (LUKS)
home_group (LVM)
    --> home_crypt (LUKS)

To be handled correctly, you need to declare:

In /etc/crypttab :

home--group-home_crypt UUID=349ca075-2922-4c9c-a52b-8dce587767ea /root/home.key luks
swap--group-swap_crypt UUID=4490ce3c-8700-4e90-81df-250cd3573b7c /root/swap.key luks
system--group-system_crypt UUID=95e39100-25c2-41be-829a-bd84fcb21d0a none luks

Use the blkid command to get the right UUID values.
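
For example, the following should list the LUKS containers along with the UUID values to copy into /etc/crypttab:

# LUKS containers show up with TYPE="crypto_LUKS"; copy their UUID values
blkid | grep crypto_LUKS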

In /etc/fstab :

/dev/mapper/system--group-system_crypt /           ext4    errors=remount-ro 0       1
UUID=6866a661-0424-472c-853e-6daa20d15d74 /boot    ext4    defaults          0       2
/dev/mapper/home--group-home_crypt /home           ext4    defaults          0       2
/dev/mapper/swap--group-swap_crypt none            swap    sw                0       0

In /etc/initramfs-tools/conf.d/resume :

UUID=none

Then, run sudo update-initramfs -u and restart.


It's the second time this has happened! After an update, my Debian refuses to boot. This time, I wasn't asked for anything!

I have a LUKS on LVM configuration. I don't use UUIDs in my cryptroot configuration, and the Debian scripts only activate devices with them! I need to manually add the "vgchange" command to activate all devices. Unfortunately, the patched script was overwritten by an update.

If it happens, follow this procedure :

  • At boot, wait for the rescue shell (~5 minutes)
  • Enter "vgchange -ay"
  • Enter "exit"

Now, you may be able to boot into your system. Then :

  • Edit /usr/share/initramfs-tools/scripts/local-top/cryptroot
  • Add "vgchange -ay" in wait_for_source() function
  • Update initramfs with "sudo update-initramfs -u"

Safely reboot!

Tip: keyctl in a bash script

Monday, 06 August 2018
|
Written by
Grégory Soutadé

Here is a simple tip to use keyctl in a bash script. keyctl is a wrapper for the Linux kernel key management interface. It allows you to securely save data in kernel memory. The man documentation is very complete, but I didn't find any example on the internet. What I initially wanted to do is to safely store a password entered by the user inside a bash shell script and keep it private to that script (not shared with other processes).

Basically, the script looks like this:

#!/bin/bash

password=SecretPassword

keyctl new_session > /dev/null
keyid=`keyctl add user mail $password @s`
keyctl show
# echo "KEYID $keyid"
keyctl print $keyid

The first thing to do is to create a new session (to detach the current shared one).

Then we add the password in a new item called "mail". We have no choice but to set the type to "user". The item will be placed into the session keyring (@s). We could create new keyrings to store it with the keyctl newring command. The command returns the item id as a big integer. We can use this integer or its name "%user:mail" for further reference.
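
For example, later in the same session the key can be read back (and removed) by this name; a small sketch:

# Retrieve the secret by name instead of keeping the numeric id around
password=`keyctl print %user:mail`
# Remove it from the session keyring once it is no longer needed
keyctl unlink %user:mail @s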

There is also a command, keyctl padd, which reads data from stdin, but I don't recommend using it, as the data is displayed in clear text on the terminal.

Finally, we show the keyring information and print our password. We use the print command to get a human-friendly output; the keyctl read command displays it in hex format...