
Monday, October 21, 2024

gRPC in Google Cloud

 

Recently, I've worked on setting up a gRPC endpoint behind a load balancer in GCP (Google Cloud). It was a new project, and in the first iteration we decided to serve it from a VM. The same machine was also the endpoint for the standard HTTP API. I set up two load balancers to redirect traffic to each port. Both were healthy, but gRPC didn't work properly.

To my great surprise, I learnt that gRPC traffic requires not only the HTTP/2 protocol, but also a TLS/SSL setup on the server end of the internal connection (LB-VM). This is required despite the fact that the traffic is already encrypted (with so-called automatic network-level encryption) [3]. The LB documentation kind of hints at this situation [1]. But initially, I could not believe that Google, who introduced gRPC, could set it up in such a messy way, and I assumed that I didn't understand the documentation properly. The fact that AI didn't give a clear answer and hallucinated a few scenarios wasn't helpful. Only after a friend of a friend, who had faced the same issue before, confirmed that GCP cannot properly handle gRPC in the LB, did I find more documentation [2].

What is really shocking is that the SSL/TLS certificate doesn't need to be valid at all. It can be an old, self-signed one. It doesn't matter; it just needs to be there! It certainly does not need to be the certificate used by the LB.

We automate VM setup with Ansible [4], so I prepared a small script to set up the SSL certificate. It's generic enough to be useful for others with minor adjustments. The code can be found at the end of the article. The "path variables" should be changed. Changing the DNS subject might also not be a bad idea, but it will probably work anyway, because the certificate does not need to be valid.

Summary

So, to have gRPC behind a load balancer in GCP, you have to:

  • Prepare an External Application Load Balancer
  • Set an HTTPS frontend (e.g. with a certificate provided by Google)
  • Configure the backend to use HTTP/2 (with gRPC and an HTTP/2 health check)
  • Prepare an SSL certificate and ensure it's present at the endpoint
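The certificate from the last point can be produced in many ways; below is a minimal sketch using plain openssl. The file names and the subject are my own placeholders, not anything GCP mandates:

```shell
# Generate a private key and a self-signed certificate in one step.
# The subject is arbitrary: the LB only requires that TLS is present
# on the backend, not that the certificate is valid or matches a name.
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout self.key -out self.pem \
  -days 3650 -subj "/CN=grpc.backend.internal"

# Sanity check what was produced.
openssl x509 -in self.pem -noout -subject
```

The resulting self.key and self.pem then just need to be referenced by the gRPC server's TLS configuration.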

Links

  1. https://cloud.google.com/load-balancing/docs/https
  2. https://cloud.google.com/load-balancing/docs/ssl-certificates/encryption-to-the-backends
  3. https://cloud.google.com/load-balancing/docs/https/http-load-balancing-best-practices
  4. https://ansible.readthedocs.io/

Ansible code

- name: Set secrets directory
  ansible.builtin.file:
    path: "{{ ivynet_backend_path_secrets }}"
    state: directory
    owner: root
    group: root
    mode: "0700"
  tags:
    - ssl

- name: Check if pem file exists
  ansible.builtin.stat:
    path: "{{ ivynet_backend_path_secrets }}/self.pem"
  register: pem
  tags:
    - ssl

- block:
  - name: Create private key (RSA, 4096 bits)
    community.crypto.openssl_privatekey:
      path: "{{ ivynet_backend_path_secrets }}/self.key"
    tags:
      - ssl

  - name: Create certificate signing request (CSR) for self-signed certificate
    community.crypto.openssl_csr_pipe:
      privatekey_path: "{{ ivynet_backend_path_secrets }}/self.key"
      common_name: self.ivynet.dev
      organization_name: IvyNet
      subject_alt_name:
        - "DNS:grpc.test.ivynet.dev"
        - "DNS:self.test.ivynet.dev"
        - "DNS:test.ivynet.dev"
    register: csr
    tags:
      - ssl

  - name: Create self-signed certificate from CSR
    community.crypto.x509_certificate:
      path: "{{ ivynet_backend_path_secrets }}/self.pem"
      csr_content: "{{ csr.csr }}"
      privatekey_path: "{{ ivynet_backend_path_secrets }}/self.key"
      provider: selfsigned
    tags:
      - ssl
  when:
    - not pem.stat.exists
 



Tuesday, August 20, 2024

Pain with Names (of computing resources)

 

A good naming convention for computing resources (APIs, functions, ...) is an important aspect of code usability. Ideally, names are short but descriptive, follow some kind of convention, and allow people who read or use the code/product intuitive usage and easy documentation searches. The Terraform (or OpenTofu) GCP (Google Cloud Platform) provider is an example of how small 'paper cuts' can make the experience painful.

Resources in GCP, at least some of them, can be global or regional. Each 'type' has its own API call (e.g. there is 'healthChecks' and 'regionHealthChecks'). Terraform follows the same convention. That's the first design decision to discuss. As the end user, I would prefer GCP, or at least Terraform as a higher-level language, to put them together in one resource type (e.g. healthCheck) with an option inside the resource to switch between them. However, Terraform covers a lot of providers, so I can guess (I haven't searched for the answer) that there is a policy to translate the underlying API in the most direct way.

The real pain is that there is no easy, general way to distinguish between regional and global resources. As mentioned above, the default health check is global and the regional one has the 'region' prefix, but for addresses, the 'address' endpoint is regional and the global one has the prefix ('globalAddress'). Terraform follows suit. I don't know why the API introduces this chaos, but I would much appreciate it if Terraform introduced some order, for example: everything that is regional gets a 'region' prefix. If the authors are afraid of introducing problems by cross-naming API endpoints and Terraform resources (e.g. global_address becoming address, and address becoming region_address), the solution could be to introduce a prefix for both types. Then we would have global_address and region_address.

Another problem is the use of prefixes rather than postfixes. As a user, I first look to create a load balancer and only then decide whether I want to make it global or regional. The API and Terraform resources should make it easier for me by exposing the more important resource feature first, on the left, e.g. address_global and address_region. Maybe global_address sounds better, but this is not poetry, and users don't read code aloud. Using prefixes makes searching and reading documentation harder, too. Methods and objects are usually listed in alphabetical order, and with postfixes, similar resources are next to each other: healthCheck is next to healthCheckRegion, etc. In the case of GCP, it would also make API usage a bit easier. The GCP API follows the camelCase function name convention, so we have 'address' but 'globalAddress' ('a' vs 'A' in 'address'). With the postfix, it would be 'address' and 'addressGlobal'. The 'a' in 'address' would always be lowercase.
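The sorting argument is easy to demonstrate in a shell. The first list below uses real GCP endpoint names, the second my hypothetical postfix variants:

```shell
# Real GCP naming: related endpoints are scattered in a sorted list
# (addresses and globalAddresses end up separated by forwardingRules).
printf '%s\n' addresses globalAddresses healthChecks regionHealthChecks \
  forwardingRules globalForwardingRules | LC_ALL=C sort

echo '---'

# Hypothetical postfix naming: related endpoints sort next to each other.
printf '%s\n' addresses addressesGlobal healthChecks healthChecksRegion \
  forwardingRules forwardingRulesGlobal | LC_ALL=C sort
```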

So what went wrong for me with all of that? Recently, I've been working on a scenario deploying a load balancer pointing at a test VM. It took me too much time. The GCP documentation is rich. Quite a lot of APIs have Terraform resource examples, but not all of them. So I was patching them together, with extra steps translating Console instructions into Terraform using the GCP provider documentation. Some examples were for regional resources, others for global ones. The provider does not link between them. It took me too long to figure out the right resource names. It went like this: "The error reads that a regional resource cannot point at a global one. But why? I don't have any global resources. Oh, in this case, you have to add the word regional." I fixed them one by one. The last error was the address. On the Terraform provider docs page, I put "compute_address" in the search bar, and I read the help page over and over, looking for how to force the resource to be global. Finally, not sure how, maybe by looking at one of the examples, I noticed that the address endpoint is regional by default, and I needed to add the 'global' prefix. Of course, I could have typed only 'address' in the search bar. Unfortunately, for many resource keywords a search returns so many results that they are not helpful, especially when you start the adventure with a new product.



Monday, April 22, 2024

Open geth (and other) binaries on MacOS

Recently, I had a problem opening the geth binary (one of the Ethereum execution clients) on macOS.


 

After a short internet search, I found it was caused by an extended attributes check. A problem like that occurs when unsigned software is downloaded from the internet, but only when using a web browser. When the curl or wget commands are used, no extended attributes are assigned to the file.

To check if the file has extended attributes, you can run a simple `ls -l` command and look for the `@` character:

> ls -l
-rwxr-xr-x@ 1 wawrzek  staff  45986920 17 Apr 07:06 geth

The list of attributes can be obtained with the `xattr` command:

> xattr geth
com.apple.quarantine

The same command can be used to remove the attribute:

> xattr -d com.apple.quarantine geth

which enables the application to run.

Tuesday, April 09, 2024

Logitech MX Keys S on macOS and Linux (Crux)

 Before I forget again.

- On macOS, the UK keyboard is recognised as the ISO (International) one. That may cause a problem with the location of the ['`','~'] key: it gets mixed up with the ['§','±'] one. I wrote 'may', because today I manually set it to the ANSI option, and that fixed the problem. But then, for a test, I switched back to ISO, and it still works fine. Maybe my problem comes from connecting the keyboard through a USB switch, rather than directly.

- Officially, Logitech supports the Logi Options+ software on Windows and Mac, but on Linux we have Solaar. It works, and if you are a CRUX user, I created a port for it.

Sunday, February 04, 2024

Open file from command line (in Linux and Macos)

One of the nice features of macOS is the open command. It allows opening files directly from the command line without knowing which application is linked to the file type. For example:

 open interesting.pdf 

opens the interesting.pdf file using whatever program is assigned to open PDF files. (If you want more examples of the open command, you can check this link.)

For some time I have wondered about a Linux equivalent. Recently, I decided to look for it more actively and check if AI might help. It did, and pointed at the gio command from the GNOME Input/Output library. After adding one of the following blocks of code (an alias or a function) to .zshrc, I have a Linux equivalent.

  • alias:

alias open="gio open"

  • function:

open () {
    gio open "$1"
}

And finally, xdg-open is an alternative to "gio open".

Saturday, December 30, 2023

Glow or Grip MD files (from GitHub)

If you ever need to render an MD file locally, e.g. when reading some documentation, you can use the glow [1] program. It renders an MD file in the terminal.

In the case of a GitHub repository, an alternative is to use the grip [2] project. It sets up a local webserver using the GitHub markdown API and produces a local view of MD files as they would appear on the GitHub website.

 

Links

  1. https://github.com/charmbracelet/glow
  2. https://github.com/joeyespo/grip

Sunday, November 12, 2023

Summary of a Terraform plan output

One of the most annoying things when working with Terraform is the size of the output of the terraform plan command. For more complex environments, it can easily get to many thousands of lines, even for what seems to be a small change. That makes it very hard to confirm that a code change does not have side effects.

It would be nice to have a summary option, showing only the resources and modules changed. I guess one day such a feature will be added. In the meantime, I thought to use the grep command on the terraform plan output. It wasn't easy, because the output contains a few control characters. After quite a few attempts, I found that the following regex is a good substitute.

terraform plan | grep -E "^[[:cntrl:]][[:print:]]+[[:space:]]+#\ "
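An alternative (my own workaround, not part of the regex above) is to strip the colour codes at the source with terraform's -no-color flag, after which a plain grep for the resource markers is enough. The plan fragment below is a made-up sample used to demonstrate the filter:

```shell
# With -no-color there are no control characters, so this suffices:
#   terraform plan -no-color | grep -E '^[[:space:]]*# '
# Demonstrated here on a captured sample of plan output:
plan_sample='  # aws_s3_bucket.logs will be created
  + resource "aws_s3_bucket" "logs" {
      + bucket = "logs"
    }
  # aws_iam_role.ci will be updated in-place'

printf '%s\n' "$plan_sample" | grep -E '^[[:space:]]*# '
```

Only the two "#" summary lines survive the filter; the resource bodies are dropped.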

Wednesday, May 17, 2023

How to find s3 bucket in multiple accounts (with awk and multiple field separator)

Imagine you have quite a few AWS accounts. In one of them, you don't know which, there is an S3 bucket. The AWS CLI with awk and zsh can help you find it.

In the first step, let's prepare a list of all accounts, or rather profiles, from the AWS CLI config (the ~/.aws/config file).

accounts=($(awk -F "( |])" '/profile sso/ {print $2}'  ~/.aws/config))

In the example, we limit the list to profiles with the prefix "sso". The command uses awk to find any line containing the string "profile sso" and print the second field from it. However, it does not use the standard field separator. The -F "( |])" option makes two characters act as separators: the space and "]" (the "|" inside the parentheses is the regex alternation operator, not a separator itself). Please also note that the awk command is wrapped in two pairs of "()", which turns its output into a zsh array.
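To see the separator in action without touching a real config, the snippet can be run against a sample (the profile names below are made up):

```shell
# A fake ~/.aws/config fragment with two SSO profiles and one other.
config='[profile sso-prod]
region = eu-west-1
[profile sso-dev]
region = us-east-1
[profile other]
region = us-west-2'

# Space and "]" both act as field separators, so the profile
# name lands in the second field.
printf '%s\n' "$config" | awk -F '( |])' '/profile sso/ {print $2}'
```

This prints sso-prod and sso-dev, one per line, skipping the "other" profile.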

The list is saved into the accounts variable and used in the second command, which lists all S3 buckets in each account and greps for the selected string, which of course can be the whole bucket name.

bucket=my-company-not-so-important-bucket
for account ($accounts) {echo $account; aws s3 ls --profile $account | grep $bucket}

Tuesday, August 30, 2022

Network debugging in k8s

 

Sometimes it is good to look at a network from inside a Kubernetes cluster. Well-prepared images do not have any useful networking tools (because they are minimal images designed to do a specific task). Therefore, you might need to run commands from a dedicated container. An example of such a tool is the netshoot image. It can be found on GitHub: https://github.com/nicolaka/netshoot

It's very easy to run: 
 

  kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot


Tuesday, May 31, 2022

Jenkins and Splunk (Cloud)

There is a Jenkins plugin to set up log export to Splunk. In the Splunk Marketplace you can find the Splunk App for Jenkins, but the app works only with Splunk Enterprise, not the Cloud edition. The documentation describes how to connect Jenkins to Splunk, and it includes values for cloud deployments as well as on-premises ones. Thanks to that, the connection was easy to establish, but there were no logs from Jenkins. I had to manually create 4 indexes (I got them from this discussion) to get Jenkins logs visible in Splunk.

Monday, December 27, 2021

Sims 4 (and Origin client) on Linux with Steam

Recently, I've spent some time trying to make Sims 4 run on Linux. (That's the way you may spend time when you have a growing-up daughter.) The ProtonDB entry was (and still is) GOLD, so it was encouraging. However, it didn't work on the first computer I tried. The Steam client showed the game as running, but there was no window, neither for the Sims 4 game nor for the Origin client. The setup was:

  • Distro: Mint 20.2
  • CPU: Intel i5-7400
  • GPU: Nvidia (Zotac GT 610 1GB)
  • Kernel: 5.4.0-88 (Ubuntu/Mint)
  • Drivers: Nvidia closed source 390.144
  • Mesa: 21.0.3
  • Proton: various versions
  • Sims 4 works: No

Then I tried on my Crux machine, and it worked fine. I don't remember if I had to add PROTON_USE_WINED3D=1. The machine had:

  • Distro: CRUX 3.6.1
  • CPU: AMD Ryzen 5 2400G
  • GPU: AMD RX 6600 XT
  • Kernel: 5.13.2
  • Drivers: amdgpu
  • Mesa: 21.2.2
  • Proton: 6.3.7
  • Sims 4 works: Yes

So I thought that maybe the GPU in the original machine was too old, or the Nvidia drivers were causing problems. I replaced the Nvidia GPU with an AMD FirePro V5900 card. It didn't help.

The next step was to run a test on my old Dell XPS 13 laptop. I installed OpenSuse Leap 15.3 on it. I played a bit with Proton versions, but in the end Sims started with Proton 7.0rc2-GE-1, the latest release from the Glorious Eggroll branch.

  • Distro: OpenSuse Leap 15.3
  • CPU: Intel i7-4510
  • GPU:Intel i915
  • Kernel: 5.13.18-59.10-default
  • Drivers: intel
  • Mesa: 20.2.4
  • Proton: 7.0rc2-GE-1
  • Sims 4 works: Yes

At the same time, I realized that I could swap CPUs between the machines. The Ryzen had a built-in Vega GPU core, which was not in use. The first phase was to check that Sims 4 could start on CRUX without a discrete GPU. It worked fine. Then I swapped the CPUs (with motherboards). CRUX (with the Intel CPU and the AMD GPU) worked fine, but Mint with the AMD Ryzen (and its integrated AMD GPU) still struggled. I tried using the Proton from the GE branch, which had helped me on OpenSuse. No luck.

  • Distro: Mint 20.2
  • CPU: AMD Ryzen 5 2400G
  • GPU: AMD Radeon Vega 11
  • Kernel: 5.15.6-1-default
  • Drivers: amdgpu
  • Mesa: 21.3.1
  • Proton: 6.3.8
  • Sims 4 works: No

I started to suspect that something was wrong with Mint itself. The Sims 4 problem seems to be a problem with the Origin client. The first, relatively easy, change was to use KDE instead of Cinnamon as the desktop environment on the Mint system. It didn't help. Next was to install OpenSuse alongside Mint. The only change compared to the Dell laptop was to use the Tumbleweed rather than the Leap edition.

  • Distro: OpenSuse Tumbleweed
  • CPU: AMD Ryzen 5 2400G
  • GPU: AMD Radeon Vega 11
  • Kernel: 5.15.6-1-default
  • Drivers: amdgpu
  • Mesa: 21.3.1
  • Proton: 6.3.8
  • Sims 4 works: Yes

This time it worked fine. No issues. In conclusion, there is something wrong with Mint, but Sims 4 and the Origin client work fine on Linux with Steam.

Thursday, November 04, 2021

dmidecode - the command I always forget about

 

From time to time I need to check some details of the hardware in one of my Linux servers. There is a good command to do this; I know it exists and has good functionality, but I can never remember the actual text to call it. The command in question is:

dmidecode

And below are links with examples and explanations of how to use it:

  •  https://www.ubuntupit.com/simple-and-useful-dmidecode-commands-for-linux/
  •  https://linuxiac.com/dmidecode-get-system-hardware-information-on-linux/

Tuesday, July 20, 2021

How to check Jenkins credentials

If you ever need to check the actual passwords in Jenkins credentials, check this small Groovy script by Tim Jacomb. It helped me to confirm that credentials were corrupted during saving and restoring with the Configuration as Code plugin.

To use it (or any other Groovy script) in your Jenkins:

  • go to the Manage Jenkins page, 
  • find the Script Console link in the Tools and Actions section, 
  • copy and paste the script into the text field 
  • run it

Saturday, May 01, 2021

ABCDE in Crux

After my OS update to CRUX 3.6 (3.6.1 to be precise), I cleaned out non-main (core, opt, xorg) packages. One of the side effects was that I lost abcde. It was removed from the contrib collection because of an inactive maintainer. Along with abcde, cd-discid was also deleted for the same reason. I decided to add them to my ports collection (https://wawrzek.name/crux/repo/). I started from the old contrib ports. Looking at the sources, I noticed that there is a recent patch for cd-discid, and I included it in my port. I also encountered problems running abcde with my config: there were missing MusicBrainz Perl modules, so I added ports for them as well.

Monday, March 01, 2021

Pulse and default sound card

For some reason, PulseAudio wants to send sound from my computer to the HDMI monitor, with its rather crappy speakers, rather than to my headphones.

To stop it, I set a default output by editing /etc/pulse/default.pa. In my case, the right configuration was:

set-default-sink alsa_output.pci-0000_38_00.6.analog-stereo

Saturday, November 28, 2020

Enforce module load in CRUX

At the moment, my Linux (CRUX 3.5) does not load the kernel module for my mainboard's monitoring chipset. The mainboard is a GigaByte GA-M68MT-S2 and the chip is a Nuvoton NCT6775F. I need data from that chip for Conky and LM Sensors. To ensure it's available, I've modified the /etc/rc.modules file by adding

modprobe nct6775

Friday, November 20, 2020

Debugging of Ansible and Molecule

As I described in this article, additional Ansible options are passed at the end of the molecule command. This can be used to increase the verbosity of Ansible:

molecule converge -- -vvvv

Molecule's own debug information can be printed to the standard output with the --debug option, but it has to be specified before the molecule subcommand. For example:

molecule --debug converge

Of course, both can be mixed together:

molecule --debug converge -- -vvvv

Sunday, November 01, 2020

edsn: Elite Dangerous Star Neighborhood

 Description

For as long as I can remember, I have been fascinated by maps of our star neighbourhood, just like the one from the European Southern Observatory website. Recently, I've enjoyed playing Elite Dangerous. I enjoy it even more because it works perfectly fine on Linux (with Proton). One of the great things about Elite is the freedom to roam between stars and visit so many star neighbourhoods.

The number of stars is breathtaking. It's hard to visualize the ones close to the system you are in. So, I wrote a small Python script to get data from EDSM and prepare it for visualization with gnuplot.

Examples

Sol

Home, sweet home.

Stars in less than 15 Light Years from Sol

These are the commands to produce the SVG output after loading script and data into Gnuplot.

set term svg size 1600,1200
set view 45, 290, 1.25, 1.5
set output 'sol-r15.svg'; replot

Achenar

I didn't know that the space around Achenar is so empty (OK, I don't have the permit yet).

Stars in less than 15 Light Years from Achenar

These are the commands to produce the SVG output after loading script and data into Gnuplot.

set term svg size 1600,1200
set view 45, 275, 1.25, 1.5
set output 'achenar-r15.svg'; replot

Thursday, October 22, 2020

Kernel module info

A few commands to help in checking and adjusting kernel module options: 

  1. Display information about kernel module, including all options

    modinfo $KERNEL_MODULE

  2. Display information about current value for each option

    for i (/sys/module/$KERNEL_MODULE/parameters/*) {
      echo $(basename $i)
      cat $i
    }


  3. Change kernel module option without restart (temporary)

    echo "$NEW_VALUE" > /sys/module/$KERNEL_MODULE/parameters/$OPTION

  4. Load a module with an option set to a value

    insmod $PATH_TO_MODULE/$KERNEL_MODULE.ko $OPTION1=$VALUE1 \
      $OPTION2=$VALUE2


  5. Ensure kernel module option is set during boot up with grub
    1. Open /etc/default/grub
    2. Add/edit following line
      GRUB_CMDLINE_LINUX='$KERNEL_MODULE.$OPTION=$VALUE'

Of course, $KERNEL_MODULE, as well as all the other strings starting with "$", has to be replaced with an appropriate value.

In CRUX /etc/default/grub does not exist, so it has to be created.


Sunday, October 11, 2020

Shell script to setup system with new kernel

 

This is a small script that helps in setting up a new kernel. It copies files from the Linux kernel source directory and builds the matching initramfs file.

 

#!/bin/sh
#
set -x
if [ "$(dirname "$PWD")" != '/usr/src' ]
then
	echo ""
	echo "Please change directory to one with the Linux kernel sources"
	exit 1
fi

version=$(basename $PWD |awk -F\- '{print $2}')


ls /boot/vmlinuz-${version}-* > /dev/null 2>&1 && \
	build=$(($(ls -1 /boot/vmlinuz-${version}-* | sort -t- -k3 -n | tail -1 | awk -F- '{print $3}') + 1)) || \
	build=1

release=${version}-${build}
cp System.map /boot/System.map-${release}
cp arch/x86/boot/bzImage /boot/vmlinuz-${release}
mkinitrd /boot/initramfs-${release}.img ${version}

grub-mkconfig > /boot/grub/grub.cfg