Kian Bradley's blog


How to Solo-mine WhaleCoin

WhaleCoin is a new cryptocurrency designed to power a decentralized social network. It launched on September 4th with mining open to all from the beginning.

WhaleCoin is a fork of Ethereum, and the project has its own version of geth, called gwhale. It's new enough (as of September 2017) that you can still profitably solo-mine, without using a pool. You'll need to

  • generate an address and private key (or, use an existing Ethereum address/key)
  • start a gwhale node
  • change some settings in gwhale to set the payout address
  • connect to your node, and start mining with Ethminer

This tutorial assumes you're using Linux.

Building gwhale

There are no prebuilt binaries for gwhale, so you'll have to build it yourself.

Update, then install go:

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install build-essential
curl -O
tar -xvf go1.8.3.linux-amd64.tar.gz
sudo mv go /usr/local

Modify ~/.profile to add the go binary to your PATH:

vim ~/.profile

Add this line to the bottom of the file:

export PATH=$PATH:/usr/local/go/bin

Save the file, then source it:

source ~/.profile
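As a quick sanity check (not strictly part of the setup), confirm the go toolchain is now on your PATH:

```shell
# should print the installed go version, e.g. "go version go1.8.3 linux/amd64"
go version
```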

Finally, build WhaleCoin:

git clone
cd WhaleCoin
make gwhale

Set up wallet in gwhale

You can either create a new wallet, or import an existing one.

Creating a new wallet

./build/bin/gwhale account new

Importing an existing private key

Create a file containing the private key:

vim ~/privatekey.txt

Then, import it:

./build/bin/gwhale account import ~/privatekey.txt

Importing an existing JSON account file

Simply copy the .json file to ~/.whalecoin/keystore/ (you may have to create this directory if it doesn't exist yet.)
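For example (the .json filename below is a hypothetical placeholder; use your real account file):

```shell
# create the keystore directory if needed, then copy the account file into it
mkdir -p ~/.whalecoin/keystore
cp ~/my-account.json ~/.whalecoin/keystore/ 2>/dev/null \
  || echo "copy your real account .json into ~/.whalecoin/keystore/"
```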

Starting and configuring gwhale

Starting the node

Start the node with RPC, allowing connections from localhost only:

./build/bin/gwhale --rpc --rpcaddr "localhost" --rpcport "8545" --rpccorsdomain "*" --rpcapi "personal,db,eth,net,web3"

Alternatively, you can allow connections from remote machines by binding to all interfaces with --rpcaddr "". If you do this, be careful! Only do it on a firewalled network, as anyone who can reach the port can access your wallet!

./build/bin/gwhale --rpc --rpcaddr "" --rpcport "8545" --rpccorsdomain "*" --rpcapi "personal,db,eth,net,web3"

Configuring for mining

Open a new terminal window and enter this command to attach to gwhale:

./build/bin/gwhale attach

Enter this command to set your payout address (also known as the etherbase, or coinbase):

miner.setEtherbase("YOUR ETH ADDRESS")

And optionally, set the extra data included with each block. By default this is something like 'geth/go1.7.4/linux', but you can change it if you want.
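For example, you can set it from the shell rather than the console; this assumes gwhale inherits geth's --exec flag and miner.setExtra API, and the tag string is arbitrary:

```shell
# set a custom extra-data string on mined blocks
# (--exec and miner.setExtra come from geth; assumed to work the same in gwhale)
./build/bin/gwhale attach --exec 'miner.setExtra("my-tag")'
```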


The full list of commands is available here:

Connecting with ethminer

Download the latest version of ethminer, if not already installed:

Extract it:

tar xvf ethminer-0.12.0rc1-Linux.tar.gz

And finally, start solo mining. If gwhale is running locally, use 'localhost' for the IP address.

./bin/ethminer -G -F http://[ip address of node]:8545

How do I tell if I mined a block?

Whalecoin explorer

The easiest way to tell is by keeping an eye on your address at
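You can also check your balance from the shell; this assumes gwhale keeps geth's console APIs and --exec flag:

```shell
# print the etherbase account's balance in whole coins
./build/bin/gwhale attach --exec 'web3.fromWei(eth.getBalance(eth.coinbase), "ether")'
```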

Gwhale output

In the gwhale node output window, you'll see the following:

INFO [09-12|09:39:32] mined potential block                  number=23791 hash=210ec7...eb085a

And then, once it is accepted into the blockchain:

INFO [09-12|09:41:29] block reached canonical chain          number=23791 hash=210ec7...eb085a

Ethminer output

In the ethminer output window, you'll see the following:

09:39:09|ethminer  Solution found; Submitting to http://localhost:8545 ...
09:39:09|ethminer    Nonce: 3976102000210460399
09:39:09|ethminer    headerHash: 2d5cae94000ce88c84775fb902d08fa4d651890cbf47bdad40c63f63dfd12acc
09:39:09|ethminer    mixHash: 1c11ed774735393c8ba9c1de2448f0035e1fe3170913455e31ef27e1c08025e6
09:39:09|ethminer  B-) Submitted and accepted.

posted 2017-09-13 22:54:29

Setting up Xen in Ubuntu 16.04

Xen is the future, you guys. While KVM has very good support and widespread use, the fact that it exists as a Linux kernel module means it runs as basically another process under Linux, with all of the scheduling issues and limitations that come along with being a process. Xen works by "pinning" the host and guest operating systems to specific cores, allowing for much greater separation of guests. In Xen, the guest runs alongside the host, instead of under it. The host, aka "dom0", sits meekly alongside with the permissions to administer guests.

Install the Xen hypervisor

sudo apt-get install xen-hypervisor-amd64

Change Ubuntu's grub bootloader to customize how Xen boots. The following gives the Xen dom0 one CPU, pins it (the CPU assigned to dom0 won't change), and gives it 4 GB of memory.

sudo vim /etc/default/grub

Add the following line:

GRUB_CMDLINE_XEN_DEFAULT="dom0_max_vcpus=1 dom0_vcpus_pin dom0_mem=4G,max:4G"

Update grub:

sudo update-grub
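Reboot so GRUB boots the Xen hypervisor entry, then confirm dom0 is actually running under Xen (xl info is the standard check; this verification step is my addition, not part of the original writeup):

```shell
# from dom0 after the reboot: prints hypervisor details if Xen booted correctly
sudo xl info
```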

Create a disk for use with Xen

This can be done in several different ways. Here I use LVM to create a new logical volume.

Basically, you'll figure out what the name of your existing volume group is, then add another logical volume into that. List volume groups with

sudo vgs

Mine is called pcp-d-15-vg. I create a 16 GB logical volume with the name xen_1:

sudo lvcreate -L 16G pcp-d-15-vg -n xen_1

More information on using LVM:

Making a config file

There are a lot of options that go into making a xen cfg file. Below is a basic config with some explanations; google around as needed to get a better understanding.

PV or HVM?

There are two ways to run Xen guests: HVM or PV mode. HVM stands for Hardware Virtual Machine, and PV stands for Paravirtualized. Traditionally, HVM provided more efficient emulation, as it gave the guest more direct access to hardware; paravirtualization provides a "paravirtualized" interface for the guest to run on, and requires that the guest have paravirtualized driver support. Recently, better paravirtualized driver support in Linux and better interaction between Xen and hardware virtualization has made paravirtualized mode the better option over HVM. (Interestingly, one of the biggest places PV shines over HVM is in page table and TLB virtualization.)

sample xen.cfg

I recommend you follow this guide on how to set up a new Ubuntu guest using their bootloader code. If you already have a prepared disk image, skip the kernel and ramdisk images and go ahead and uncomment bootloader.

tsc_mode controls how x86 timestamp counter (TSC) instructions are virtualized, which is complicated. Read more here:

name = "example-ubuntu-guest"
# memory in megabytes
memory = 2048
# number of cpus, which cpus this guest is pinned to
vcpus = 4
cpus = "5-8"

tsc_mode = "native"

kernel = "/var/lib/xen/images/ubuntu-netboot/trusty14LTS/vmlinuz"
ramdisk = "/var/lib/xen/images/ubuntu-netboot/trusty14LTS/initrd.gz"
#bootloader = "/usr/lib/xen-4.4/bin/pygrub"

disk = ['/dev/pcp-d-15-vg/xen_1,raw,xvda,rw']

# see my xen networking article for info on how to set up networking:
# setting-up-nat-networking-in-xen-using-virsh.html

Starting up Xen

The Xen control program is called xl. Given the config file "xen.cfg", start up a guest domain with

sudo xl create xen.cfg

If it works, the guest will have started in the background, and you'll need to attach to its console in order to control it. First you'll need the guest domain's ID (domid). List domain IDs by typing

sudo xl list

Attach to the console by typing

sudo xl console DOMID

You can then detach from this console with the hotkey ctrl-] (control and right bracket).

A domain can then be shut down by instructing the guest operating system to shut down (e.g. with the Linux shutdown command), or by using xl. Gracefully request an OS shutdown with the command

sudo xl shutdown DOMID

Force an immediate shutdown with

sudo xl destroy DOMID

posted 2017-09-13 22:54:29

Setting up NAT networking in Xen using virsh

There are two main ways to set up networking in Xen. You can use a bridged network, or you can set up NAT. A bridged network means that the guest domains will talk to the router directly to get an IP address. NAT networking creates a subnet local to your machine, and the guest domains will talk to dom0 to get an IP address.

Neither one is better than the other, really. Bridged networking is slightly simpler if you want something that just works. NAT-ing will create an internal network that allows for simpler local (domain-to-domain) communication and greater control over external communication. The downside is that you'll need to set up a static IP per guest and set iptables rules to allow for external communication.

Installing virsh

Install libvirt:

sudo apt-get install libvirt-bin libvirt0

Check that it's been installed, and that the default network is in place:

virsh net-list --all

Set static IP, associate each IP with a mac address

Edit the default virsh config:

sudo virsh net-edit default

Under the <dhcp> tag, add a listing for each guest. The name can be whatever you want it to be. For the MAC address, the first 3 bytes should not be changed: this is the OUI assigned to the Xen project. The last 3 can be whatever you like. This is my DHCP configuration, with three guest domains configured:

    <range start='' end=''/>
    <host mac='00:16:3e:00:00:02' name='osv' ip=''/>
    <host mac='00:16:3e:00:00:03' name='ubuntu' ip=''/>
    <host mac='00:16:3e:00:00:04' name='rumprun' ip=''/>
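For context, those host entries live inside the <dhcp> element of the network definition. A full default network looks roughly like this; the addresses are placeholders I've filled in with libvirt's usual defaults, since the originals were elided:

```xml
<network>
  <name>default</name>
  <forward mode='nat'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='' netmask=''>
    <dhcp>
      <range start='' end=''/>
      <host mac='00:16:3e:00:00:02' name='osv' ip=''/>
      <host mac='00:16:3e:00:00:03' name='ubuntu' ip=''/>
      <host mac='00:16:3e:00:00:04' name='rumprun' ip=''/>
    </dhcp>
  </ip>
</network>
```

After editing, restart the network so the changes take effect:

sudo virsh net-destroy default
sudo virsh net-start default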

Setting up a guest domain with NAT

standard xen cfg

In your Xen guest configuration file, add the following virtual interface, where the MAC address corresponds with the virsh configuration:

vif = ['mac=00:16:3e:00:00:03,bridge=virbr0']

rumprun unikernel

The rumprun unikernel is launched with the rumprun script. Here "newnet" is used internally by the script and can be set to whatever you like. rumprun_image.bin represents the baked rumprun binary you are running:

rumprun -S xen -id -I newnet,xenif,'bridge=virbr0,mac=00:16:3e:00:00:04' -W newnet,inet,dhcp rumprun_image.bin

posted 2017-09-13 22:54:29

Running rumprun for Xen in Ubuntu 16.04

Running rumprun under Xen isn't hard, but it's less documented than running it under KVM. This page is similar to Rumprun's guide to building rumprun unikernels with a few Xen-specific changes.

Build the rumprun platform

Install prerequisite xen headers and build tools:

sudo apt-get install build-essential libxen-dev

Clone their repo, cd, build:

git clone
cd rumprun
git submodule update --init
CC=cc ./ xen

Add binaries to PATH

You've now built rumprun, and the binaries necessary for building, baking, and running are located in rumprun/bin. You'll want to add these to your PATH variable for convenient access:

export PATH="${PATH}:$(pwd)/rumprun/bin"

You can also add this to your ~/.bashrc to make these changes permanent.

vim ~/.bashrc

Append the following, where [location of rumprun] represents the directory containing rumprun:

export PATH="$PATH:[location of rumprun]/rumprun/bin"

Building applications

Get some source code and use rumprun's version of gcc to compile it. (Follow the rumprun tutorial for a more thorough explanation.) Here, helloer.c is our source code and helloer-rumprun is the output binary.

x86_64-rumprun-netbsd-gcc -o helloer-rumprun helloer.c

Baking applications

I was going to make a joke here but I can't think of anything clever right now. You need to bake it. That means running a command to add in all the kernel-y bits that make rumprun ready for it. Here, helloer-rumprun is the binary we just built and helloer-rumprun.bin is the binary with the necessary rumprun pieces.

rumprun-bake xen_pv helloer-rumprun.bin helloer-rumprun

Running applications

Here's the hard part. The rumprun command is a script that will create a Xen configuration file in /tmp and start up a Xen PV guest. For Xen, it will look like this:

rumprun -S xen -id -g [Xen config options] -I [network interface] -W [more network options]

The -I and -W flags can be omitted if there is no need for networking. I have networking set up using NAT, in which there exists a subnet local to the machine. Look at my article on Xen networking to see how I set up networking within rumprun.

posted 2017-09-13 22:54:28