Yvan Janssens

My personal ramblings from stomping out random bugs

Porting GnuCOBOL to IBM i

Yvan Janssens
September 1 2019 03:44:38 PM
I've successfully ported GnuCOBOL to IBM i, and it is now available in the repo.qseco.fr IBM i yum repository. It can be installed using yum install gnucobol, and a specfile is available in our specfiles repository.

The following terminal log demonstrates its use:

bash-4.4$ uname -a
OS400 FLINT 2 7 00XX000XXXXX Os
bash-4.4$ cat hello.cob
       IDENTIFICATION DIVISION.
       PROGRAM-ID. Hello-world.
       PROCEDURE DIVISION.
           DISPLAY "Hello, world!".
           STOP RUN.
bash-4.4$ cobc -x hello.cob
bash-4.4$ ./hello
Hello, world!
bash-4.4$ ls
hello  hello.cob

Porting notes

Compiling GnuCOBOL from source on IBM i is trivial and does not require any patching. However, when building it using a specfile and the rpmbuild build harness, compilation fails with the following error message:

. ../tests/atconfig && . ../tests/atlocal extras-CBL_OC_DUMP.so \
        && $COBC -m -Wall -O -o CBL_OC_DUMP.so CBL_OC_DUMP.cob
gcc.bin: error: unrecognized command line option '-R'; did you mean '-R'?

This happens because the rpmbuild build harness supplies rpath values to the configure script, which makes the internal compilation commands that cobc generates invalid.

This can be resolved by appending --disable-rpath to the %configure statement in the specfile.

IBM i "Hello World" in ILE C

Yvan Janssens
May 28 2019 01:01:35 PM
This tutorial will guide you through writing a simple Hello World application in ILE C and running it.
Difficulty: Easy
Requirements:
  • Basic C knowledge
  • TN5250 client
  • IBM i V5R4 or higher
  • Compilers and dev tools installed on the i box

Table of contents

1. Create Source Physical File

2. Write/enter code

3. Compile the code

4. Run the code

1. Create Source Physical File

Source code files on IBM i are stored in special kinds of Physical Files. These files have source members, which effectively act as your individual source files. IBM i does not offer subdirectories inside a source physical file, so this is a flat directory structure.

To create a source physical file, use the CRTSRCPF command. Press F4 after entering the command to open the prompt screen:

[Screenshot: CRTSRCPF prompt screen]
For this tutorial, we are creating the source physical file "HELLOC" in the library "YVANJ" (which is the default library for my user profile).

2. Write/enter code

For the purpose of this tutorial we are going to use the built-in IDE "Program Development Manager". You can start this IDE using the 'STRPDM' command:

[Screenshot: STRPDM main menu]

We continue by selecting "3. Work with members" and fill in the details of the source physical file we just created:

[Screenshot: Work with Members prompt]

You will be greeted with the contents of your source physical file, which should be empty:

[Screenshot: the member list, still empty]

Hit F6 on your keyboard (Create), and fill out the form as below:

[Screenshot: Create Member form]

Hit Enter to confirm.

You will be greeted with SEU, the Source Entry Utility. Enter the following code snippet, and make sure not to press Enter; move your cursor with the arrow keys instead:

[Screenshot: SEU with the code snippet entered]

After filling out the snippet, hit enter, and then F3.

[Screenshot: SEU exit screen]

Keep these defaults, and press enter again. You have now created your first C code file on IBM i:

[Screenshot: member list showing the new member]

3. Compile the code

C code on IBM i is compiled using the CRTBNDC command. Back out of the screen you are currently on by repeatedly pressing F3 until you're back at the main menu. Issue CRTBNDC and press F4 to open the prompt screen as before:

[Screenshot: CRTBNDC prompt screen]

Program is the output binary, and the indented Library parameter below it is the library in which you want to store your executable. In my case, this is a binary named 'hello' in the 'yvanj' library. The next parameters denote the source physical file and the source member to compile: we use the HELLOC source physical file we created earlier, specify the library in which we created it, and then specify the source member we created with F6 earlier. We finish with Enter.

After a short while, the following status line should become visible:

[Screenshot: compilation status line]

4. Run the code

Running our application is fairly simple: you can call a program by issuing CALL LIBRARY/PROGRAM. In our case this becomes CALL YVANJ/HELLO.

This will result in the following output:

[Screenshot: program output]

Starting Domino 10 Community Edition using systemd on Ubuntu 18.04

Yvan Janssens
April 21 2019 05:24:49 PM
The included startup scripts with the installer do not work on Ubuntu 18.04, and some of the examples available are more complex than necessary. The following simple systemd stanza will do the job just fine:

[Unit]
Description=IBM Domino Server
After=syslog.target network.target

[Service]
ExecStart=/bin/bash /opt/ibm/domino/bin/server

[Install]
WantedBy=multi-user.target

Save this file as /etc/systemd/system/domino.service, and issue a systemctl daemon-reload command. Then enable the service using systemctl enable domino and start Domino 10 with systemctl start domino. Domino should now start automatically on boot, and the server log can be viewed by issuing systemctl status domino.

Fixing "JRE libraries are missing or not compatible...." on Traveler installation

Yvan Janssens
March 26 2019 02:07:26 PM
On modern Ubuntu Linux (18.04 LTS in my case), the InstallAnywhere installer fails with the following error:

JRE libraries are missing or not compatible....

IBM provides the following article to troubleshoot this issue:


Sadly enough, the information and workaround in that article are not entirely accurate. The installer fails because it relies on a bundled JRE, which is 32-bit only (even though the applications themselves are 64-bit). As a consequence, you need multilib enabled and the 32-bit libraries installed. Refrain from carrying out the fix in the article: it will lead to stability issues on your server and won't fix the installation process.

The installation process can be fixed using:

$ sudo dpkg --add-architecture i386
$ sudo apt-get update
$ sudo apt-get install zlib1g:i386

This should allow you to continue the installation. This fix will work on most InstallAnywhere self-extracting installers from that time frame.

Demystifying AS/400 DASD

Yvan Janssens
March 11 2019 07:11:11 AM

There seem to be quite a few misconceptions about AS/400 hard drives, commonly referred to as DASD (direct access storage devices). It is commonly known that only IBM-supplied DASD work, and typically only the FRUs listed. The big question though is "why is this the case?".

Spinning disks die eventually. At the moment, generic SCSI disks are fairly commonly available; however, the FRUs needed to keep old AS/400s alive for hobbyist and archival purposes are becoming more and more scarce, with the supply eventually drying up.

Distinction between models
Some of the information here is model-dependent, so a distinction is made between the following systems:

CISC/IMPI-based AS/400s
These are the old pre-PowerPC AS/400s. They will typically run OS revisions up to and including V3R2. They operate on a complex interaction of microcode on top of a more generic architecture and several subsystems, not unlike the channel architecture on their larger brethren. A more thorough description of how these machines work is a future project. Example model numbers are 9401-P03 or 9401-P02.

Early PowerPC-based AS/400
PowerPC-based AS/400s were gradually introduced to replace the earlier CISC/IMPI-based models; typically, the high-end models were replaced first. PowerPC brought a massive speed boost to the AS/400 product line; however, its real potential was only realised later. Early OS revisions (e.g. V3R6) were direct ports of the CISC/IMPI-based code and were quite slow; a thorough rewrite of the kernel in V4 addressed most of these issues. Early PPC-based AS/400s would typically be able to run V3R6 or later. An example model is the 9403-53X. I am still not convinced whether this is due to the OS revision or the hardware revision.

PowerPC-based AS/400
These are the models typically supplied with OS/400 V4 or later, all the way up to the Power5-based models (P5-based models are a special case, but this is not relevant for DASD). Example models are 9406-170 or 9406-250.

520, 522 and 512 bytes per sector
It is commonly known that the AS/400 uses an odd sector size. However, the information that can be found online is quite contradictory: some sources quote 520 bytes, some 522, and some claim 522 is only used in systems with a RAID controller. Based on my observations and tests, this is not true. The difference does not lie in the presence of a RAID controller, but in the type of AS/400: PowerPC-based AS/400s will typically have 522 bytes per sector, and CISC/IMPI-based AS/400s will have 520 bytes per sector. There are a few edge cases though, especially on early PowerPC-based AS/400s: if the DASD were migrated from a CISC/IMPI-based machine, they would typically not have been re-formatted on those early machines on the OS revisions of the time.

So, TL;DR:
  • Standard SCSI disks: 512 bytes per sector
  • CISC/IMPI-based AS/400: 520 bytes per sector
  • Early PowerPC-based AS/400: 520 or 522 bytes per sector
  • PowerPC-based AS/400s: 522 bytes per sector

Custom VPD

To anyone with an idea of how SCSI disks work this will be obvious: the vital product data (VPD) on AS/400 DASD contains entries verified by the firmware. This VPD typically contains the serial number and the disk type (e.g. 6713). When installing SLIC on a PowerPC-based AS/400, the VPD is verified to contain the expected data. Another quirk to note is that the OS does not (always) rely on SCSI READ CAPACITY commands to identify the size of the DASD. As a result, putting the VPD of a 6714 (~18 GB) on a 70 GB volume does not result in the drive being detected as a 70 GB drive.

Magic commands

Another common misconception is that IBM uses proprietary commands to interact with the drive. This is not entirely true. There are, however, a few implementation-specific quirks that the OS relies on when dealing with drives.

SCSI FORMAT implementation

The SCSI FORMAT implementation needs to be accurate. The OS relies on the disks being formatted using a SCSI FORMAT command. OS/400 technically doesn't have a file system (more about that will be documented in another write-up), and the way it deals with data storage requires hard drives to be completely empty upon first use. If the SCSI FORMAT implementation doesn't fill the drive with the specified pattern (not all drives implement this properly), the OS installer will crash or hang, because it reads back a few sectors to verify that formatting succeeded (it does not just trust the status code returned by the SCSI FORMAT command).

SKIP READ and SKIP WRITE

To optimise disk reads and writes, OS/400 heavily relies on SKIP READ and SKIP WRITE. These commands need to be implemented, and implemented properly. More information about them can be found at http://ps-2.kev009.com/rs6000/manuals/SAN/ESS/2105_Model_ExxFxx/ESS_SCSI_Command_Reference_ExxFxx_SC26-7297-01.PDF. The reason why OS/400 relies on them is another story, closely related to the way the file system (or lack thereof) works.


That's it, really. The only things that make an AS/400 DASD special are:
  • custom VPD
  • 520/522 byte sector size based on model
  • SCSI FORMAT being implemented properly
  • SKIP READ and SKIP WRITE being implemented properly

I hope this helps some people troubleshooting their failing hard drives.

Getting files onto my PS/2 50Z

Yvan Janssens
March 10 2019 01:03:11 PM

The usual problem most people face with old PCs is the dreaded file transfer challenge. I am aware that USB floppy drives like these exist; however, they're less than ideal. They're okay for transferring individual files, but they often mess up when you try to write disk images with them. Writing the Windows 95 DMF floppies with them is completely out of the question.

I stumbled upon FastLynx by accident while trying to solve another issue. It works really well for transferring files to a DOS machine over a null modem serial cable, and its 'bootstrap mode' lets you copy the client to the target device without having to write any floppies. It also works reasonably well on Windows 10 with USB-to-serial adapters. There's a free demo available which is enough to transfer a handful of files (you will probably want to ZIP them anyway and run them through PKUNZIP on the target device to reduce transfer size, since RS-232 is quite slow).

Download link for demo (in case the original website disappears): fx33demo.exe

Fixing my PS/2 Model 50Z floppy drive

Yvan Janssens
March 10 2019 12:35:13 PM
Recently I acquired a second PS/2 Model 50 (I already had a Model 50Z), which was listed on eBay for spares. The machine was indeed dead (well, beyond reasonable repair, but that's another story), but it did come with a bunch of parts I was looking for to use in my main PS/2:
  • 386 upgrade CPU
  • More memory
  • Spare ESDI drive (you always need spares of these)
It also had an FDD. The case was badly damaged, so there wasn't much to recover from it; however, the front plate for the floppy drive was in good shape. As a result, I decided to add this drive to my main PS/2 as a second floppy drive (which is always useful, especially when dealing with reference diskettes). However, the drive from the other PS/2 didn't seem to work: it was detected, but it couldn't read or write floppies. Last weekend I took it apart to give it a good clean and inspect it for potential failures, and I noticed this switch:

[Photo: the configuration switch on the floppy drive]
Sony MP-F77W drive, IBM FRU 72X8523, IBM model MFD-77W

This switch has four positions, and I discovered the following outcomes:
  • Position 0: not working; the machine hangs during IPL and doesn't even boot the reference diskette from the first drive (which is a known working drive)
  • Position 1: the drive gets detected, with the failure mode described above
  • Position 2: the drive works as expected

I successfully wrote a floppy using this drive and read it in the other drive, and vice versa.