
Monday, October 21, 2024

gRPC in Google Cloud

 

Recently, I worked on setting up a gRPC endpoint behind a load balancer in GCP (Google Cloud). It was a new project, and in the first iteration we decided to serve it from a VM. The same machine was also the endpoint for a standard HTTP API. I set up two load balancers to redirect traffic to the respective ports. Both were healthy, but gRPC didn't work properly.

To my great surprise, I learnt that gRPC traffic requires not only HTTP/2, but also a TLS/SSL setup on the server end of the internal connection (LB to VM). It has to be done despite the fact that this traffic is already encrypted (with so-called automatic network-level encryption) [3]. The load balancer documentation kind of indicates the situation [1], but initially I could not believe that Google, who introduced gRPC, could set it up in such a messy way, and I assumed I hadn't understood the documentation properly. The fact that AI didn't give a clear answer and hallucinated a few scenarios wasn't helpful. Only after a friend of a friend, who had faced the same issue before, confirmed that the GCP load balancer cannot handle a gRPC backend without TLS did I find more documentation [2].

What is really shocking is that the SSL/TLS certificate doesn't need to be valid at all. It can be old and self-signed; it doesn't matter, it just needs to be there. It certainly does not need to be the certificate used by the LB.
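For a quick manual test, a throwaway certificate generated directly on the VM is enough. A minimal sketch (the file names and the subject are placeholders, adjust them to your setup):

openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout self.key -out self.pem -subj "/CN=grpc.internal.example"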

We automate VM setup with Ansible [4], so I prepared a small play to set up the SSL certificate. It's generic enough to be useful for others with minor adjustments. The code can be found at the end of the article. The path variable should be changed, and changing the DNS subject might not be a bad idea either, although it will probably work anyway, because the certificate does not need to be valid.

Summary

So, to serve gRPC behind a load balancer in GCP, you have to (a rough gcloud sketch follows the list):

  • Prepare an external Application Load Balancer
  • Set an HTTPS frontend (e.g. with a certificate provided by Google)
  • Configure the backend to use HTTP/2 (with gRPC and an HTTP/2 health check)
  • Prepare an SSL certificate and ensure it's present on the endpoint
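The same steps sketched with gcloud. All names, the backend port and the domain are placeholders, and the exact flags may differ for your setup; this assumes a global external Application Load Balancer and a VM already wrapped in an instance group:

# HTTP/2 health check hitting the gRPC port
gcloud compute health-checks create http2 grpc-hc --port=50051

# backend service speaking HTTP/2 towards the VM
gcloud compute backend-services create grpc-backend \
    --protocol=HTTP2 --health-checks=grpc-hc \
    --load-balancing-scheme=EXTERNAL_MANAGED --global
gcloud compute backend-services add-backend grpc-backend \
    --instance-group=grpc-ig --instance-group-zone=europe-west1-b --global

# HTTPS frontend with a Google-managed certificate
gcloud compute ssl-certificates create grpc-cert --domains=grpc.example.com
gcloud compute url-maps create grpc-map --default-service=grpc-backend
gcloud compute target-https-proxies create grpc-proxy \
    --url-map=grpc-map --ssl-certificates=grpc-cert
gcloud compute forwarding-rules create grpc-fe \
    --target-https-proxy=grpc-proxy --ports=443 --global \
    --load-balancing-scheme=EXTERNAL_MANAGED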

Links

  1. https://cloud.google.com/load-balancing/docs/https
  2. https://cloud.google.com/load-balancing/docs/ssl-certificates/encryption-to-the-backends
  3. https://cloud.google.com/load-balancing/docs/https/http-load-balancing-best-practices
  4. https://ansible.readthedocs.io/

Ansible code

# Directory for the self-signed key and certificate; the path variable is specific to our setup.
- name: Create secrets directory
  ansible.builtin.file:
    path: "{{ ivynet_backend_path_secrets }}"
    state: directory
    owner: root
    group: root
    mode: "0700"
  tags:
    - ssl

- name: Check if pem file exists
  ansible.builtin.stat:
    path: "{{ ivynet_backend_path_secrets }}/self.pem"
  register: pem
  tags:
    - ssl

# Generate the key and certificate only when the pem file is not already present.
- name: Create self-signed certificate and key
  block:
  - name: Create private key (RSA, 4096 bits)
    community.crypto.openssl_privatekey:
      path: "{{ ivynet_backend_path_secrets }}/self.key"
    tags:
      - ssl

  - name: Create certificate signing request (CSR) for self-signed certificate
    community.crypto.openssl_csr_pipe:
      privatekey_path: "{{ ivynet_backend_path_secrets }}/self.key"
      common_name: self.ivynet.dev
      organization_name: IvyNet
      subject_alt_name:
        - "DNS:grpc.test.ivynet.dev"
        - "DNS:self.test.ivynet.dev"
        - "DNS:test.ivynet.dev"
    register: csr
    tags:
      - ssl

  - name: Create self-signed certificate from CSR
    community.crypto.x509_certificate:
      path: "{{ ivynet_backend_path_secrets }}/self.pem"
      csr_content: "{{ csr.csr }}"
      privatekey_path: "{{ ivynet_backend_path_secrets }}/self.key"
      provider: selfsigned
    tags:
      - ssl
  when:
    - not pem.stat.exists
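Once the play has run and the service is configured to present the certificate, it is worth checking that the VM really serves TLS on the gRPC port. A quick probe from another machine (the IP and port are placeholders):

openssl s_client -connect 10.128.0.5:50051 </dev/null 2>/dev/null | openssl x509 -noout -subject -dates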
 



Tuesday, August 20, 2024

Pain with Names (of computing resources)

 

A good naming convention for computing resources (APIs, functions, ...) is an important aspect of code usability. Ideally, names are short but descriptive, follow some kind of convention, and allow people who read or use the code/product to work intuitively and search the documentation easily. The Terraform (or OpenTofu) GCP (Google Cloud Platform) provider is an example of how small 'paper cuts' can make the experience painful.

Resources in GCP, at least some of them, can be global or regional. Each 'type' has its own API call (e.g. there is 'healthChecks' and 'regionHealthChecks'), and Terraform follows the same convention. That's the first design decision to discuss. As an end user, I would prefer GCP, or at least Terraform as a higher-level language, to put them together in one resource type (e.g. healthCheck) with an option inside the resource to switch between them. However, Terraform covers a lot of providers, so I can guess (I haven't searched for the answer) that there is a policy to translate the underlying API in the most direct way.

The real pain is that there is no easy, general way to distinguish between regional and global resources. As mentioned above, the default health check is global and the regional one has the 'region' prefix, but for addresses the 'address' endpoint is regional and the global one has the prefix ('globalAddress'). Terraform follows suit. I don't know why the API introduces this chaos, but I would much appreciate it if Terraform introduced some order, for example: everything regional gets a 'region' prefix. If the authors were afraid of introducing confusion by cross-naming API endpoints and Terraform resources (e.g. globalAddress becoming address, and address becoming region_address), the solution could be to add a prefix to both types. Then we would have global_address and region_address.

Another problem is the use of prefixes rather than suffixes. As a user, I first look at how to create a load balancer and only then decide whether I want it global or regional. The API and Terraform resources should make that easier by exposing the more important part of the name first, on the left, e.g. address_global and address_region. Maybe global_address sounds better, but this is not poetry and users don't read code aloud. Using a prefix also makes searching and reading the documentation harder. Methods and objects are usually listed in alphabetical order, and with suffixes similar resources sit next to each other: healthCheck is next to healthCheckRegion, and so on. In the case of GCP it would also make API usage a bit easier. The GCP API follows the camelCase naming convention, so we have 'address' but 'globalAddress' ('a' vs 'A' in 'address'). With a suffix it would be 'address' and 'addressGlobal', and the 'a' in 'address' would always stay lowercase.

So what went wrong for me with all of that? Recently, I was working on a scenario deploying a load balancer pointing at a test VM, and it took me far too much time. The GCP documentation is rich, and quite a lot of APIs have Terraform resource examples, but not all of them. So I was patching them together, with extra steps translating Console instructions into Terraform using the GCP provider documentation. Some examples were for regional resources, others for global ones, and the provider does not link between them. It took me too long to figure out the right resource names. It went like this: "The error says that the regional resource cannot point at the global one. But why? I don't have any global resources. Oh, in this case you have to add the word region." I fixed them one by one. The last error was the address. On the Terraform provider docs page, I put "compute_address" into the search bar and read the help page over and over, looking for a way to force the resource to be global. Finally, probably while looking at one of the examples, I noticed that the address resource is regional by default, and I needed the 'global' prefix (google_compute_global_address). Of course, I could have typed just 'address' into the search bar, but for many keywords the search returns so many results that it is not helpful, especially when you are starting your adventure with a new product.



Monday, April 22, 2024

Open geth (and other) binaries on macOS

Recently, I had a problem opening the geth binary (one of the Ethereum execution clients) on macOS.


 

After a short internet search, I found it was caused by the extended attributes check. A problem like this occurs when unsigned software is downloaded from the internet, but only when a web browser is used. When curl or wget is used, no extended attributes are assigned to the file.

To check if the file has extra attributes, you can run a simple `ls -l` command and look for the `@` character:

> ls -l
-rwxr-xr-x@ 1 wawrzek  staff  45986920 17 Apr 07:06 geth

The list of attributes can be obtained with the `xattr` command:

> xattr geth
com.apple.quarantine

The same command can be used to remove the attribute, which enables the application to run:

> xattr -d com.apple.quarantine geth
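If the attribute is set on a whole directory (e.g. an unpacked archive with several binaries), I believe the same command works recursively with the -r flag (the directory name below is just an example):

> xattr -rd com.apple.quarantine geth-tools/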

Tuesday, April 09, 2024

Logitech MX Keys S on macOS and Linux (Crux)

 Before I forget again.

- On macOS, the UK keyboard is recognised as an ISO (International) one. That might cause a problem with the location of the ['`','~'] key: it gets mixed up with the ['§','±'] one. I wrote 'might', because today I manually set it to the ANSI option and that fixed the problem, but when, for a test, I switched back to ISO, it still worked fine. Maybe my problem came from connecting the keyboard through a USB switch rather than directly.

- Officially, Logitech supports the Logi Options+ software only on Windows and Mac, but on Linux we have Solaar. It works, and if you are a Crux user, I created a port for it.

Sunday, February 04, 2024

Open file from command line (in Linux and macOS)

One of the nice features of macOS is the open command. It allows opening files directly from the command line without knowing which application is linked to the file type. For example:

 open interesting.pdf 

opens the interesting.pdf file using whatever program is assigned to open PDF files. (If you want more examples of the open command, you can check this link.)

For some time I have wondered about a Linux equivalent. Recently I decided to look for it more actively and to check if AI might help. It did, and pointed at the gio command from the GNOME Input/Output library. After adding one of the following blocks of code (an alias or a function) to .zshrc, I have a Linux equivalent.

  • alias:

alias open="gio open"

  • function:

open () {
    gio open "$@"
}

Finally, xdg-open is an alternative to "gio open".

Saturday, December 30, 2023

Glow or Grip for MD files (from GitHub)

If you ever need to render an MD file locally, e.g. when reading some documentation, you can use the glow [1] program. It renders an MD file in the terminal.

In the case of a GitHub repository, an alternative is to use the grip [2] project. It sets up a local webserver using the GitHub markdown API and produces a local view of MD files as they would appear on the GitHub website.
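Both tools are straightforward to use; for example (the file name is just an illustration):

glow README.md
grip README.md

glow renders the file directly in the terminal, while grip prints a local URL to open in a browser.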

 

Links

  1. https://github.com/charmbracelet/glow
  2. https://github.com/joeyespo/grip

Sunday, November 12, 2023

Summary of a Terraform plan output

One of the most annoying things when working with Terraform is the size of the output of the terraform plan command. For more complex environments, it can easily reach many thousands of lines, even for what seems to be a small change. That makes it very hard to confirm that a code change does not have side effects.

It would be nice to have a summary option showing only the resources and modules being changed. I guess one day such a feature will be added. In the meantime, I thought of using the grep command on the terraform plan output. It wasn't easy, because the output contains a few control characters. After quite a few attempts, I found that the following regex is a reasonable substitute.

terraform plan | grep -E "^[[:cntrl:]][[:print:]]+[[:space:]]+#\ "
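The control characters come from the coloured output, so an alternative I find slightly easier to maintain is to disable colours and match the '#' summary lines directly (the pattern below is an approximation and may need tweaking for a particular plan output):

terraform plan -no-color | grep -E '^[[:space:]]*# .* (will|must) be'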