Random DNS Queries?

While sifting through my DNS query log, I came across these highly suspicious DNS queries:

query[A] urjhyay.local
query[A] tadzwrawzpuqdc.local
query[A] fgrxhztwffv.local
query[A] tadzwrawzpuqdc.local
query[A] urjhyay.local
query[A] fgrxhztwffv.local
query[A] qieosgbvzzhtt.local
query[A] qdjpfwezir.local
query[A] kzqwcaaq.local
query[A] qdjpfwezir.local
query[A] kzqwcaaq.local
query[A] qieosgbvzzhtt.local
query[A] prhwndine.local

While this looks malware-ish, I found a plausible explanation for this weirdness; it seems to be related to the Chrome browser:

Among those requests Chrome also tries to find out if someone is messing with the DNS (i.e. "nasty" ISPs that have wildcard DNS servers to catch all domains). Chrome does this by issuing 3 DNS requests to randomly generated domain names, for every DNS extension configured.
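If you want to check for this sort of wildcard/hijack behaviour yourself, something along these lines works (just an illustration of the idea, not Chrome's exact algorithm; dig comes from the dnsutils/bind-utils package):

for i in 1 2 3; do
  name="$(tr -dc 'a-z' < /dev/urandom | head -c 12)"
  dig +short "$name" A   # empty output means NXDOMAIN, an address means someone is wildcarding
done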


Running Oracle Sun Ray 5.4 Software template in Proxmox

Acquire the Sun Ray software template (which is about to be discontinued) and unpack the template file.

Contents:

OVM_OL6U3_X86_64_SRS5.4_PVHVM.mf:  ASCII text
OVM_OL6U3_X86_64_SRS5.4_PVHVM.ova: POSIX tar archive
OVM_OL6U3_X86_64_SRS5.4_PVHVM.ovf: XML document text

Create a VM in Proxmox with two ZFS volumes.
Read OVM_OL6U3_X86_64_SRS5.4_PVHVM.ovf to get an idea of how many resources the machine should use.

Unpack the tar file:

OVM_OL6U3_X86_64_SRS5.4_PVHVM.ova
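The .ova is just a POSIX tar archive, so something like:

tar xvf OVM_OL6U3_X86_64_SRS5.4_PVHVM.ova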

This will give you:

Product.img: gzip compressed data, was "Product.img", last modified: Fri May  3 15:01:48 2013, max compression, from Unix

System.img:  gzip compressed data, was "System.img", last modified: Fri May  3 14:59:13 2013, max compression, from Unix

These gzip-compressed images need to be unpacked into raw images:

mv Product.img Product.gz && gunzip Product.gz
mv System.img System.gz && gunzip System.gz

Now look at their properties:
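You can inspect them with file(1), assuming the unpacked images are still named System and Product:

file System Product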

System:                            DOS/MBR boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x8480e, GRUB version 0.94

Product:                           Linux rev 1.0 ext4 filesystem data, UUID=d0dd198a-031a-4dbe-8345-b411986f460e, volume name "Product-SRS54" (extents) (large files) (huge files)

Find the VM's ZFS volumes in /dev/zvol. For instance:

dd if=System of=/dev/zvol/vmpool/vm-107-disk-1 bs=1M

dd if=Product of=/dev/zvol/vmpool/vm-107-disk-2 bs=1M

When you boot your VM, you'll probably encounter a kernel panic because it's unable to find the volume groups, since the disks are named using the Xen convention (xvdb and so on).

Enter the GRUB menu and edit the last entry in the menu. Remove "rhgb quiet", add init=/bin/bash and then boot. Remount / as read-write. Edit /etc/fstab and point /opt at the correct disk. Make sure /boot is correct as well.
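Roughly, once you have the bash shell (a sketch; the actual device names depend on how you attached the disks in Proxmox):

mount -o remount,rw /
vi /etc/fstab    # replace the xvd* entries for /opt and /boot with the actual devices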

Also adjust the GRUB settings for the new disks. Then you can enable the public yum repository and install all available updates, which gives you a new kernel that you can actually boot from. Make sure the GRUB config also points to the newest kernel.
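For Oracle Linux 6 that used to be along these lines (the repo URL may have moved since then):

cd /etc/yum.repos.d
wget http://public-yum.oracle.com/public-yum-ol6.repo
yum update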

The default Sun Ray admin password is 5r5demo and can be reset using:

/opt/SUNWut/sbin/utpw

If you need to restart the Sun Ray services, run:

Warm Restart

/opt/SUNWut/sbin/utstart

Cold Restart

/opt/SUNWut/sbin/utstart -c

Netbooting NetBSD on Sun SPARCstation 5 and Sun Ultra 1

So, this is more or less a quick brain dump from my NetBSD installations on my SPARCstation 5 and Ultra 1. I used a Raspberry Pi 3 for DHCP/rarpd/TFTP/NFS.

I based my installations on the following guides:

http://www.netbsd.org/docs/network/netboot/intro.sun.ofw.html
http://www.ogris.de/howtos/netbsd-sparc-install.html

Below I just added a few things I thought were missing or unclear in the guides.

/etc/ethers:

08:00:20:xx:xx:xx ss5
08:00:20:xx:xx:xx ultra1

At the OK prompt, start netbooting with either boot net or boot net:dhcp; usually either one works.

For the sparc64 install (Ultra 1), when you are asked for the installation media and choose “local media”, you need to type in:

Base Directory: /
Binary Set Directory: /sparc64/binary/sets/

I did not edit the part about the source sets location.

Import FreeBSD 11 Qcow2 image into Proxmox VE 4.4

In this example, the following image is used:

wget https://download.freebsd.org/ftp/snapshots/VM-IMAGES/11.0-STABLE/amd64/Latest/FreeBSD-11.0-STABLE-amd64.qcow2.xz

Download it and then unpack it on the Proxmox host:

xz -d FreeBSD-11.0-STABLE-amd64.qcow2.xz

Convert to RAW Image

qemu-img convert -f qcow2 -O raw FreeBSD-11.0-STABLE-amd64.qcow2 FreeBSD-11.0-STABLE-amd64.raw

Create a VM with a SCSI disk in Proxmox.

Find the ZFS volume that Proxmox has created for the VM:

zfs list -t volume

Write the raw disk image to the ZFS volume, vm-102-disk-1 in this example:

dd if=FreeBSD-11.0-STABLE-amd64.raw of=/dev/zvol/vmpool/vm-102-disk-1 bs=1M

Now you should be able to boot FreeBSD and log in as root without a password.

Netinstall Solaris 8 on a Sun Ultra 1 from Raspberry Pi 3

This one was a real journey!

There are a few guides that try to explain how to perform a netinstall/netboot of Solaris 8 from Linux. However, they all lacked crucial parts of the procedure.

A few sources for this guide are:
http://znark.com/tech/solarisinstall.html
https://www.docbert.org/Solaris/Jumpstart/linux.html

Here is my go at a complete procedure. The goal is just to get the darn OS installed; you can install fancy schmancy packages later if you want.

First, acquire the Solaris 8 64-bit ISO from Oracle/wherever.
They come as two zips:

p10356262_800_SOLARIS64_1of2.zip
p10356262_800_SOLARIS64_2of2.zip

You only need to extract p10356262_800_SOLARIS64_1of2.zip.
This will give you the following iso: sol-8-hw4-sparc-v1.iso
Put it somewhere on your RPI3.

RPI 3 Configuration

Setup the Netinstall Network

In this example, the RPI3 will have the IP 192.168.2.1 on its NIC eth0.
The Ultra 1 will have the IP 192.168.2.200.

Edit /etc/network/interfaces and configure eth0 like this:

iface eth0 inet static
address 192.168.2.1
netmask 255.255.255.0
network 192.168.2.0
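Then bring the interface up (assuming plain ifupdown, as on Raspbian):

ifdown eth0; ifup eth0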

Install and Configure Software

# Install the following packages
apt-get install rarpd nfs-kernel-server bootparamd

# Start a separate terminal for easier monitoring and run
rarpd -e -v -d eth0

# Create the directory for the installation media
mkdir -p /pub/solaris

# Mount the Solaris ISO
mkdir -p /mnt/cdrom
mount -o loop sol-8-hw4-sparc-v1.iso /mnt/cdrom

# Copy all the files we need for the installation
find /mnt/cdrom -depth -print | cpio -pdmu /pub/solaris

Configure NFS

Edit /etc/exports and add the following:

/pub/solaris/ 192.168.2.0/24(ro,no_root_squash)
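Then re-export (or just restart nfs-kernel-server, as shown further down):

exportfs -ra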

The SUN machine will be called harpocrates.

# Add harpocrates to the hosts file on the RPI3
echo "192.168.2.200 harpocrates" >> /etc/hosts

Setup bootparams

Edit /etc/bootparams and add the following (make sure it's all on one line):

harpocrates  root=192.168.2.1:/pub/solaris/Solaris_8/Tools/Boot  install=192.168.2.1:/pub/solaris/ install_config=192.168.2.1:/pub/solaris/Solaris_8/Tools/Boot/usr/sbin/install.d/install_config/ sysid_config=192.168.2.1:/pub/solaris/Solaris_8/Tools/Boot/usr/sbin/install.d/install_config/sysidcfg rootopts=:rsize=32768 boottype=:in

Configure Jumpstart

It appears that the netinstall uses a graphical dialogue for just a few questions (or at least that is what happens for me), so we need to create an answer file that Jumpstart will use.
Create the Jumpstart file /pub/solaris/Solaris_8/Tools/Boot/usr/sbin/install.d/install_config/harpocrates and add (adjust the disk sizes as you wish):

install_type    initial_install
system_type     standalone
cluster         SUNWCall
package         SUNWaccr        add
package         SUNWaccu        add
package         SUNWgzip add
partitioning    explicit
filesys         c0t0d0s0         512    /
filesys         c0t0d0s1        2048    /var
filesys         c0t0d0s2         all    overlap
filesys         c0t0d0s3        2048    swap
filesys         c0t0d0s4        1024    /usr
filesys         c0t0d0s5        free    /local

Edit the file /pub/solaris/Solaris_8/Tools/Boot/usr/sbin/install.d/install_config/rules.ok; we will tell it to use our Jumpstart profile for the sun4u architecture:

karch sun4u                             install_begin   harpocrates     patch_finish

Edit the file /pub/solaris/Solaris_8/Tools/Boot/usr/sbin/install.d/install_config/sysidcfg and adjust it as you wish; however, I'm not really sure whether it is actually used, or only partially. You still get to set the root password at boot and the previous IP settings are kept... hmm.

system_locale=sv.UTF-8
timezone=MET
terminal=sun-cmd
timeserver=localhost
root_password=m4QPOWNY
network_interface=le0 {hostname=harpocrates
                       default_route=192.168.1.1
                       ip_address=192.168.1.98
                       netmask=255.255.255.0
                       protocol_ipv6=no}

The Important but Ugly Hack

We need to add a few lines to the startup file, otherwise you will get an error when the installer tries to transition from the console to the graphical environment. The error appears to be about the framebuffer device (it's probably too lazy to figure out how to load the required modules on its own). The error is "/sbin/startup: /dev/fbs does not exist".

I found the solution here.
It still spits out some errors, but it works.

Edit the file /pub/solaris/Solaris_8/Tools/Boot/sbin/startup and add the snippet below, right after the comment block shown:

##########
# Make sure all configuration necessary is completed in order
# to run the window system

# Fix module shit: re-run devfsadm with the link modules copied to a writable location so /dev/fbs gets created
mkdir /tmp/linkmod
cp -f /usr/lib/devfsadm/linkmod/* /tmp/linkmod/
ls -l /tmp/linkmod
devfsadm -l /tmp/linkmod/

Start the installer

Now make sure all services are started/restarted after editing the configs on your RPI3:

systemctl restart bootparamd
systemctl restart nfs-kernel-server

Now boot your Sun Ultra 1 to the OK prompt and run (note the lone dash and the space before install):

boot net -v - install

Let it load for a while. When it gets to "configured interface le0" it appears to freeze forever.
Just send a ping from your RPI3 to 192.168.2.200 and it will magically unfreeze and continue.
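From the RPI3:

ping -c 5 192.168.2.200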

So, after a few minutes the installation GUI should pop up with a loud beep.
You get to make a few decisions. After setting the timezone and confirming the settings, a console window shows up which will load the Jumpstart profile.

Here I had loads of problems on my first attempts, mainly because I tried to merge Solaris 8 CD1 and CD2 into one directory and somehow forgot to copy the CD1 Product directory at first. It got really messy. When something like that happens, or your NFS export is not working, you will most likely see a message like "There is no valid Solaris product on the media /cdrom".
The installer reads the .cdtoc file in /pub/solaris/, finds the location of the Product directory there, and checks that some specific files exist in it.

So, after an hour or so of really slow disk access, you'll hopefully have a working Solaris 8 system on your Sun Ultra 1!

Bonus: Quirks when installing additional packages after installation

Since my goal was just to get the OS installed, I had to install all the extra applications I wanted afterwards.
I created a Products directory on my RPI3 containing the contents of the Product directories from the install CDs, then shared this via NFS and mounted it on my Sparc.

When installing, for instance, Netscape (called NSCPcom) from the share (pkgadd -d <full product directory path> NSCPcom), the installer failed, reporting something like "pkgadd: ERROR: class action script did not complete successfully".

This is due to permission problems: there is a check script in the installer that probably does not have permission to read your NFS mount. You can solve this quick and dirty by copying the files somewhere local on your Sparc and trying the install again.
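Something like this, where /mnt/products is just an example of where the NFS share is mounted on the Sparc:

cp -r /mnt/products/NSCPcom /var/tmp/
pkgadd -d /var/tmp NSCPcom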

PowerDNS 4 – Update slaves after zone2sql

PowerDNS has a very nifty tool, zone2sql, to import BIND zone files into its database.
It has a problem though: your slaves won't receive these new zones.
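For reference, the import itself looks roughly like this (exact flags depend on your pdns version and backend):

zone2sql --named-conf=/etc/bind/named.conf --gmysql > zones.sql
mysql powerdns-master < zones.sql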

This happens because the zones are inserted as type "Native", which means that you have to rely on SQL replication or some other mechanism to transfer the new data to the slaves.

Luckily, the fix is rather easy; the following example uses MySQL.

Go to your master server and convert all domains to type MASTER, so that pdns will start to notify its slaves about this fabulous happening.

Assuming your master database is called “powerdns-master”:

update `powerdns-master`.domains set type = 'MASTER';

Now wait a few seconds or minutes and your slaves will receive notifications!
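If you are impatient, you can also nudge pdns for a specific zone (example.com is just a placeholder):

pdns_control notify example.com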

Install Oracle VM Manager / Ops Center on CentOS 7.2

Oracle VM Manager and Ops Center can be installed on CentOS 7 without problems (though also without official support), even if the installers refuse to do it. The officially supported Linux distributions are RHEL and OEL.

This is easily fixed.

VM Server

The installation program uses the Python platform library to figure out which Linux distribution we are running. It has lots of methods to do this, but interestingly the installer decides that the server is running CentOS because it reads the file names in /etc/ and stops at centos-release. Remove centos-release from /etc and it will continue until it finds the redhat-release file instead, and the installer prerequisites are met.

  1. Temporarily remove the /etc/centos-release file from /etc directory.
  2. Start the VM Server installer.
  3. When installation is done, place the centos-release file in /etc again.
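In shell terms, something like this (the backup location is arbitrary):

mv /etc/centos-release /root/centos-release.bak
# run the VM Server installer here
mv /root/centos-release.bak /etc/centos-release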

Ops Center

  1. Temporarily remove the /etc/centos-release file from /etc directory.
  2. echo "Red Hat Enterprise Linux Server release 7.2 (Maipo)" > /etc/redhat-release
  3. Start the Ops Center installer.
  4. When installation is done, place the centos-release file there again.
  5. Remove /etc/redhat-release and re-link redhat-release to centos-release
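Or as commands; note that on CentOS 7 /etc/redhat-release is normally a symlink to centos-release, so I would remove it before writing the fake release string (a sketch, backup location is arbitrary):

mv /etc/centos-release /root/centos-release.bak
rm -f /etc/redhat-release
echo "Red Hat Enterprise Linux Server release 7.2 (Maipo)" > /etc/redhat-release
# run the Ops Center installer here
mv /root/centos-release.bak /etc/centos-release
rm /etc/redhat-release
ln -s centos-release /etc/redhat-release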

Zyxel NAS540 NFS exports all shares RW to the world by default?

While poking around with NFS exports on my Zyxel NAS540, I noticed that it is useless to set any DN/IP filter, since the whole NFS directory is exported world-wide RW as:

/i-data/<disk id>/nfs *(rw,sync,crossmnt,fsid=0,no_subtree_check,wdelay,no_root_squash) #

This share is not visible from the web interface but can easily be confirmed using showmount on any other system on the network:

showmount -e nas_ip

Export list for nas_ip:

/i-data/<disk id>/nfs               *
/i-data/<disk id>/nfs/kitties      192.168.1.145/24

So you'd better comment out the /i-data/<disk id>/nfs * line in /etc/exports and then run:

/i-data/sysvol/.PKG/NFS/bin/exportfs -r

You get what you pay for I guess.