Monday, June 27, 2022

For VM junkies, the bridge to Docker goes through Bitnami

I have been spending time trying to wrap my head around containers, mainly the Docker container. There are others up and coming, but since Docker is the most popular, understanding it will prepare you to understand the rest. It is not easy for me, coming from a VM background, especially understanding how some things work in containers versus how they work in a VM environment. Modelling Docker from a VM perspective is the fastest way for me to learn it, but there are some major differences.

But I haven't stopped using VMs. In fact, a recent discovery of mine has shortened the distance from "I want to try this" to "I have it running to test things out". Bitnami makes and maintains ready-to-download VMs. Each VM provides a specific function: essentially, a dedicated system delivering a service. They come in the OVF format, making them fairly portable. However, I had problems importing one on an old ESXi because the OVF format has changed and there are 'extra files'.
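For anyone hitting the same wall, the workaround that usually gets suggested is to unpack the appliance and feed the old ESXi only the parts it understands. A rough sketch, with file names made up for illustration (yours will differ per appliance):

    # An OVA is just a tar archive: unpack it to get at the individual files
    tar xvf bitnami-guacamole.ova

    # Delete the manifest so the old ESXi doesn't choke on checksums
    # for files it doesn't recognize (the 'extra files')
    rm bitnami-guacamole.mf

    # Re-import using only the descriptor and the disk; older importers
    # are happier with a minimal file set
    ls
    # bitnami-guacamole.ovf  bitnami-guacamole.vmdk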

The one I found is a good web gateway that allows access from the Internet to a local server, entirely through a browser. Apache Guacamole is not a household name, but it offers access via SSH and the Windows desktop (RDP) through its web interface. Just click on a pre-defined link and it brings you to the session in the browser.

I tried extracting the VMDK file (the disk file) and creating a VM around it. But the disk didn't like the way it was being booted and kept dumping me into EFI. A little reading made me aware that the Guacamole VM was running on Debian, running GRUB, my mortal enemy. My clashes with it are documented elsewhere, so I won't bore you.

I then tried running it on a KVM host. Again, I unpacked the OVF, converted the VMDK to QCOW2 and created a VM around it. It worked straight out of the box. Bitnami VMs have a one-time startup sequence, and the first login requires a password change. But once the banners show how to connect to Guacamole (or whatever service the VM is providing), it is intuitive to work with. Links and menu items can be spawned off into other tabs (showing a high degree of HTML compatibility).
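In case it helps someone, the conversion itself boils down to two commands. A minimal sketch, assuming the unpacked disk is named bitnami-guacamole.vmdk (names and sizes are illustrative):

    # Convert the VMware disk image to QCOW2 for KVM
    qemu-img convert -f vmdk -O qcow2 bitnami-guacamole.vmdk guacamole.qcow2

    # Build a VM around the converted disk and let libvirt import it
    # (the appliance is Debian-based, hence the os-variant guess)
    virt-install --name guacamole --memory 2048 --vcpus 2 \
      --disk path=guacamole.qcow2,format=qcow2 \
      --import --os-variant debian10 --network default --noautoconsole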

Making it work with SSH hosts is straightforward, and Windows Remote Desktop connections are not too difficult if you are the admin. Windows requires some changes to the server's Remote Desktop settings, but nothing that would cripple the server or make it noticeably riskier.
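For reference, the simplest way stock Guacamole defines connections is the user-mapping.xml file (the Bitnami image wires connections up through its web interface instead, so treat this as the generic Guacamole way). A minimal sketch with made-up hosts and credentials:

    # Define an SSH and an RDP connection for a user 'demo' (illustrative values)
    cat > /etc/guacamole/user-mapping.xml <<'EOF'
    <user-mapping>
      <authorize username="demo" password="changeme">
        <connection name="linux-ssh">
          <protocol>ssh</protocol>
          <param name="hostname">192.168.1.10</param>
          <param name="port">22</param>
        </connection>
        <connection name="windows-rdp">
          <protocol>rdp</protocol>
          <param name="hostname">192.168.1.20</param>
          <param name="port">3389</param>
          <!-- match whatever security mode the Windows server is set to -->
          <param name="security">nla</param>
          <param name="ignore-cert">true</param>
        </connection>
      </authorize>
    </user-mapping>
    EOF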

I used to love SSL gateway devices, before they were killed off by Java security updates and by security professionals who never quite understood them and always favoured VPNs. Guacamole gets close to the connectivity level those devices used to provide.

Thursday, November 04, 2021

The Right Kind of Complex

I've always been interested in new technology, but I'm always wary of complexity for complexity's sake. I know that some people push for this type of technology simply to take advantage of it. By making it complex, they make it mysterious. When it's mysterious, it's magic. And when it's magic, you can charge whatever you want.

There are also those who want to be in an exclusive club, and complex technology is a way to build a clubhouse where only those who understand are allowed in. Well, not really: those who understand but don't care for that kind of exclusivity still don't get in. And it seems that way with systemd. Now before you skip this because it looks like another systemd rant, rest assured it's not.
I want to talk about something that is appropriately complex, yet rewards those who brave its complexities. I'm talking about Docker. I've heard about it for so long on numerous technology podcasts; I heard the podcast where the inventors of Kubernetes began to popularize it. Yet I found no occasion to use it. Fortunately, I can set up systems pretty fast and never needed it before to improve my delivery cycle. I believe in forward planning, and in leaving enough space to handle the unexpected.
Recently, however, I was pressed for time to deploy a system that used multiple nodes to process complex data. There was a front end, a node manager, a back-end component and the nodes themselves. The authors of the system very much encouraged deploying it using Docker. The system was intriguing to me, and it had components that I hadn't worked with before. But there wasn't enough time.
There were the usual challenges of setting up a system, such as dealing with dependencies and outdated components. On top of that, the client requested that the system be migrated from its original Linux distribution to a distribution that the organization is used to managing. After giving it a few tries (and failing), I decided to follow the authors' strong suggestion and deployed it using Docker. The system was up in almost no time at all on the distribution favored by the client. I was taken aback at how simple the process was. I understand that the distribution inside the containers really didn't change; each container was more or less self-contained and sufficient to make the system work.
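To give a feel for why it was so fast: a multi-component deployment like that reduces to pulling and starting containers. A hedged sketch with made-up image and component names, not the actual product:

    # Each component runs from its own image; the host distribution no longer
    # matters because each container carries its own userland
    docker network create appnet

    docker run -d --name backend  --network appnet example/backend:1.0
    docker run -d --name manager  --network appnet example/node-manager:1.0
    docker run -d --name node1    --network appnet example/worker:1.0
    docker run -d --name frontend --network appnet -p 8080:80 example/frontend:1.0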
I decided to take a deep dive into Docker. I wanted to know enough to deploy other solutions and to make Docker a tool that I would regularly use. I found that my experience building systems allowed me to understand not only what was going on, but also gave me insight into the decisions made by the people who created the Docker images. Docker is complex, but satisfyingly so given the rewards of using it. It is complex in the right way: complex because it needs to be, not making something previously simple complex for its own sake. It rewards those who are willing to brave its complexities, but still offers riches to those who just intend to use the basic functions. It isn't magic, but it does seem so.
I have only begun my journey with Docker. The mysteries of building and managing my own images lie ahead. From where I stand, some parts do look complex, and where they aren't, the decisions and issues around those decisions are. I love learning about systems like this and passing on the knowledge to those coming up behind me. I also like sharing the issues and working out with my clients the decisions around those issues. That way, I make the magic less mysterious. It is still magic to them, but sharing the decision-making process creates collaboration and acceptance.

Wednesday, May 19, 2021

A problem not big enough to solve?

In open source, 'scratching your itch' is the birthplace of many a project. The assumption is that someone who has a problem, a.k.a. is "itchy", has the resources (e.g. time, effort) to develop a solution (to scratch that itch). With so many open source solutions already built using this time-honored method, the issue nowadays is more one of finding the project that scratches your itch than of building one of your own. In fact, this has led to a lot of dead projects, some of which were brilliant but lost in the shuffle.

But it surprised me to discover an itch, a problem, so prevalent that someone should have done something about it by now.

I was setting up a new MS Windows 10 environment at home and decided I needed to access a Linux box remotely, while also being able to run X11 applications from that box. My go-to solution has been MobaXterm, but since it is a freemium solution, I have always installed it with a caveat. I also didn't like that it charges for what is basically an integration of existing open source solutions (in a way, at least). Okay, re-packaging. I remembered seeing an alternative called mRemoteNG which has roughly the same features, is open source and is extendable.

Illustration: "Solved the Problem" by Manypixels Gallery on Iconscout (https://iconscout.com/illustrations/solved-the-problem)
I downloaded a portable version and in no time was able to reach servers via SSH tunnels. mRemoteNG does rely on external applications, like PuTTY, to do the actual connection, but the presentation and configuration management features it provides were very welcome. Finally, I decided to use an X11 application on the server. MobaXterm has a built-in X11 server, and using it was a no-brainer. But mRemoteNG has no documentation for this. Even online, people offered suggestions, like using the VcXsrv X11 server for Windows, but there was no clear indication that anybody had successfully done so. Which is odd, considering that running an application from a Linux box would be one of the first things one would do after connecting to it. Or have we been disciplined enough to limit ourselves to the command line?


I used Xming back in the day, but there are warnings that it doesn't run on Windows 10. There seemed to be a myriad of things to consider when setting up VcXsrv (e.g. display number, permission settings) before being able to run a single X11 application. Which is strange, considering the X11 architecture was intended to let complex X11 applications run and consume resources on a remote machine while presenting just the UI to the user.
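For the record, here is the closest I got to a working recipe: a sketch, assuming PuTTY as the backend that mRemoteNG drives, with display numbers and paths illustrative rather than definitive:

    # On Windows: start VcXsrv on display :0
    # ("-ac" disables access control; fine on a trusted LAN, risky elsewhere)
    "C:\Program Files\VcXsrv\vcxsrv.exe" :0 -multiwindow -ac

    # In PuTTY (which mRemoteNG calls): under Connection > SSH > X11,
    # tick "Enable X11 forwarding" and set X display location to localhost:0

    # On the Linux box, inside the SSH session:
    echo $DISPLAY     # should be set by the forwarding, e.g. localhost:10.0
    xclock            # quick test that X11 apps reach the Windows desktop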

So I chalk this up to an itch not itchy enough to solve. Someone has solved the connection part, and someone else the ability to run X11 on Windows, but nobody has done the work of setting up mRemoteNG together with VcXsrv and sharing how. And that is a shame, given that MobaXterm runs X11 apps straight from the ssh window.

Maybe MobaXterm is the problem already solved, and nobody is willing to take the extra step of building an open source alternative to it. Like I said, not itchy enough.

Sunday, May 16, 2021

Pandemic and Mageia Madness

Fortunately, the pandemic has had little negative impact on me. Working remotely, cooped up in the house, endless remote meetings: tell me something new. It's just more of it. And my experience with work during the pandemic is the opposite of most people's: I got even busier. New clients looking for a cheaper way of doing the same things. Companies looking at open source solutions mainly as a cost reduction option, suddenly okay with solutions that cost less even though they stick out in their MS Windows environments. I'm not complaining, but some days I am at the edge of it.

I've recently moved to the most recent version of Mageia. Well, forced to, more likely: a distro upgrade broke and screwed up the loading of the kernel. I could have tried to fix it, but decided to just start fresh. The data directories were on a separate partition (best practice ever), so installing fresh just meant I might have to deal with configuration differences between the older KDE/Cinnamon and the most recent ones (since the new install would be reading my existing home folder). Long story short, it worked like a charm (other than my USB stick coming down with a case of bad blocks).
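The "best practice" in question is just keeping /home on its own partition, so a reinstall can format / and leave the data untouched. A sketch of the relevant /etc/fstab lines (device names are illustrative):

    # / gets formatted on reinstall; /home is merely re-mounted afterwards
    /dev/sda1  /      ext4  defaults  1 1
    /dev/sda2  /home  ext4  defaults  1 2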

Then came the whole process of reconsidering the apps I really needed versus the apps I wanted (but almost never use). This is where my thoughts return to the needs of the average user. Does the average user use the apps I use or need? Do they use apps that I don't? This is important to me as a Linux advocate, because if Linux doesn't meet the needs of the average Joe, adoption will always fall short.

This was a slow start.

Wednesday, May 23, 2018

Thankful for past programmers of mature open source systems

I am a big fan of Bacula. It is an enterprise-class backup system with a huge amount of flexibility when it comes to its setup and the setup of backup jobs. It was built to handle and manage tapes, and it has the most flexible way of selecting and choosing which directories and files to back up.
I've since moved on to Bareos, mainly because they were adding features and functions to the backup system that moved with the times. Their decision to build a web-based user interface also made me gravitate to their orbit.
Recently, I've been setting up Bareos with MySQL version 8. The main takeaway when working with MySQL 8 is that its defaults now lean toward the more secure variety.
Which leads to problems installing Bareos: the scripts written to set up the database took advantage of the laxer requirements of previous MySQL versions.
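The concrete change that bites here, as far as I can tell, is that MySQL 8 dropped the old shortcut of creating a user inside a GRANT statement, which is exactly the kind of thing older setup scripts do. Roughly:

    # Pre-MySQL-8 style (common in older setup scripts) - now fails:
    #   GRANT ALL PRIVILEGES ON bareos.* TO bareos@localhost IDENTIFIED BY 'secret';

    # MySQL 8 needs the two steps separated:
    mysql -u root -p <<'EOF'
    CREATE USER 'bareos'@'localhost' IDENTIFIED BY 'secret';
    GRANT ALL PRIVILEGES ON bareos.* TO 'bareos'@'localhost';
    FLUSH PRIVILEGES;
    EOF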
So I set about figuring out what to change and where. The scripts that set up the database live in /usr/lib/bareos/scripts/. There are three of them: create_bareos_database, make_bareos_tables and grant_bareos_privileges. All three call a library script, bareos-config-lib, at the start, which provides the base functions and parameters. Running the scripts would throw up an error, and I needed to see which executed command was producing it. In this case, the commands were being provided through the bareos-config-lib script, which itself called other files and scripts.
After poking around, I decided to think like an open source programmer: I looked at the other files in the same directory and started reading the code for clues. I found another script called bareos-config, which takes a parameter, and the parameter is the name of a function defined in bareos-config-lib. The bareos-config-lib file had a function called get_database_grant_privileges, so to see what that function provided, I executed bareos-config get_database_grant_privileges, which printed the commands the function would execute. bareos-config get_database_driver tells me which database Bareos is configured for. bareos-config get_database_password provides the database password Bareos uses to access the database. And so on and so forth.
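So the debugging loop became: ask bareos-config what it would run, eyeball the output, and fix the offending statement by hand. Something like:

    cd /usr/lib/bareos/scripts

    # Dump the commands the grant script would execute, without running it blind
    ./bareos-config get_database_grant_privileges

    # Confirm which backend and credentials the scripts think they're using
    ./bareos-config get_database_driver
    ./bareos-config get_database_password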
This is the sign of a mature open source project: a tool exists to validate inputs and commands, created for the benefit of future programmers. Now I know exactly what my problem is, and I can fix it.

Friday, August 11, 2017

Why I think SystemD is acting like a bully

It seems that I post only to complain about systemd. I have tons of other articles in draft, but only the ones about systemd drive me hard enough to complete and post them. My other post on systemd took somewhat longer, but this one was written in record time, so much so that I suspect I write as a form of release or therapy in the face of all the roadblocks systemd has put between me and what I want to accomplish.
Systemd is its developers' way of saying to all the guys who have been doing Linux longer than them, "Sorry, what you knew then is worth nothing now. We are the next generation and we are changing the game." Typical talk from the young who think they know it all. It gets worse when you realize they are also saying, "We saw something better and we are doing it that way", when what they really saw was Linux through a terminal session on Windows or a Mac. It's the behavior of pushing people up the stairs even when you don't really know what's at the top. All they see is light, but it could just as well be a cliff.
While the developers claimed to be seeing the light, it's odd that one of the main things systemd did was blind others to what it was doing, by making people jump through hoops with journalctl. It kicked syslog to the curb even though it didn't do everything syslog did. Being opaque was the order of the day.
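To be fair, the replacement exists; it's just different. What used to be a tail of a plain text file is now a query tool. A rough equivalence, not exhaustive:

    # The old way: everything is a text file
    tail -f /var/log/syslog

    # The systemd way: ask journalctl, with the right flags
    journalctl -f                      # follow everything
    journalctl -u sshd --since today   # one unit, one time window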
Systemd is a solution looking for a problem. Rather than building a layer on top of work done before, they decided to start over, which was okay, but then they said "It's my way or the highway" and "We'll redo the tools you made before, but they'll only work with our stuff, our way." It doesn't necessarily improve anything; it's completeness of control for its own sake, masking an attitude of "I can't be bothered interfacing with anyone else's stuff". Systemd has a bully mentality, and it probably rubbed off from the people who developed it. It also has a Windows mentality of "(more) complexity is the solution" and "security is what we do later", which is telling of where the developers get their ideas.
Basically RedHat, who is funding this, gains the most, whether they realize it or not. They can't control Linus and his kernel team, so why not build a wall around the kernel and force everybody to go through systemd to get daily things done? Linus can do what he wants in the kernel; RedHat controls the doors leading to it. And a knock-on effect for them is that more people require retraining, because all that accumulated knowledge is worth less now with systemd.
Let's get this straight: systemd works when it gets out of the way, as in desktop distros. I run Mageia and it's wonderful; I don't have to deal with systemd directly. But when it forcefully makes previously simple tasks complicated, then we have a problem. If it changes stuff while I am configuring other things, and then claims "but I told you so, in the logs that need four parameters to be readable", we have a problem. Look, things aren't perfect and improvements are always welcome, and Linux people love to learn new stuff. But it's hard to compare alternatives when the first thing systemd does is say "I'm the only one competing." And being forced to learn is always a turn-off.
But I guess we are living in times when bullying is ok.

Wednesday, May 17, 2017

Top 5 mistakes new programmers make while developing

Is programming an art or a science? Numerous proofs can be made about programming languages and their properties, which puts programming in the realm of science; yet inspiration is the source of a program that can be described as elegant as well as efficient.
That being so, in the early years programming is often a singular pursuit that offers great satisfaction. As a programmer matures and interest becomes vocation, the nature of programming changes. What was a lone effort is now collaborative, and where there were none in the past, deadlines constantly loom over the programmer. This change has led many programmers down the same path of discovery and maturity. Some become disillusioned; others trudge on, often making the same mistakes as those ahead of them.
Being a programmer myself, I am also guilty of some of the following mistakes. In no particular order,

1. Trying to fix the problem on their own. A remnant of the lone-programmer mindset, or of the 'lone programmer against the world' world view. Often followed by the belief that nobody else can help or solve the problem but themselves, despite knowing full well that someone else has walked the path before them. Solution: a. Always repeat to yourself: this is not a unique problem. Someone else has solved it, or solved something similar to it. Look for that solution. b. Talk to someone. Sometimes the act of telling someone provides another perspective.
2. Dismissing bugs as 'small' in front of others. Those 'small bugs' can get very big. Treat all bugs the same, or at least through the same process.
3. Going down a rabbit hole. That is, hyper-focusing on one issue, which creates more problems or needs changes elsewhere, and so on recursively until a few hours are lost. Solution: mandatory breaks where you stop thinking about the problem and have something to eat, OR talk to someone about the problem and what you are doing.
4. Not being familiar with the production OS platform, and not caring. Assuming that just because it runs on one platform, it must run on the other. Followed by the attitude that because it doesn't, it's the platform's fault. The big picture: the customer doesn't care. All they want to see is that it's done and running. Even if the platform developers are at fault, you still have to care for the target platform. What other services does it offer? How do I use them? Classic example: while a lot of people know about ssh, not many have used scp, despite both using the same platform and technologies (see the sketch after this list).
5. Assuming the most technical/complicated solution is the right one, to match the difficulty of the bug, because only such a solution is 'worthy' of it. In reality, the best solution is often the simplest. Wield Occam's Razor wisely and the path to the solution will present itself.
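On the ssh/scp point in mistake 4: the "other service" is one command away, using the very same connection machinery. A minimal example (host and paths are made up):

    # Log in interactively
    ssh deploy@app-server

    # Copy a file over the same machinery - no extra setup needed
    scp build/app.tar.gz deploy@app-server:/opt/releases/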
