Posts

Showing posts from March, 2014

Vert.x Polyglot server - Java, JavaScript, Ruby, Python, Groovy, Scala

http://vertx.io

Vert.x appears to outperform node.js by >= 2x for basic socket handling and also when serving small static pages

RESTful service

Shark SQL for Hadoop, HBase, Cassandra

Shark (SQL) runs on Apache Spark, which can read from Hadoop, HBase, and Cassandra
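
As a rough sketch (assuming a Shark 0.9-era install with SHARK_HOME set; the table name is hypothetical), queries are plain HiveQL submitted through the Shark CLI and executed as Spark jobs:

$SHARK_HOME/bin/shark -e "SELECT page, COUNT(*) FROM weblogs GROUP BY page"   # runs on Spark instead of MapReduce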

Ansible and Salt

Ansible and Salt are frameworks that let you automate various system tasks. Their biggest advantage relative to other solutions such as Chef and Puppet is that they handle not only the initial setup and provisioning of a server, but also application deployment and ad-hoc command execution.
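
For example, Ansible covers ad-hoc commands, provisioning, and deploys with the same tool over plain SSH (the inventory group "web" and playbook name below are hypothetical):

ansible web -m ping                                        # ad-hoc: check SSH connectivity to the "web" group
ansible web -m apt -a "name=nginx state=present" --sudo    # ad-hoc: provision a package (Ansible 1.x syntax)
ansible-playbook deploy.yml --limit web                    # playbook: repeatable application deployment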

Ephemeral ports

By default, Ubuntu only uses local ports in the 32768-61000 range (the ephemeral port range) for outgoing connections

/usr/src/linux/Documentation/networking/ip-sysctl.txt

ip_local_port_range - 2 INTEGERS

        Defines the local port range that is used by TCP and UDP to
        choose the local port. The first number is the first, the 
        second the last local port number. Default value depends on
        amount of memory available on the system:
        > 128Mb 32768-61000
        < 128Mb 1024-4999 or even less.
        This number defines number of active connections, which this
        system can issue simultaneously to systems not supporting
        TCP extensions (timestamps). With tcp_tw_recycle enabled
        (i.e. by default) range 1024-4999 is enough to issue up to
        2000 connections per second to systems supporting timestamps.
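
To check or widen the range, something like the following works (the example values are arbitrary):

cat /proc/sys/net/ipv4/ip_local_port_range                              # show current range, e.g. "32768 61000"
sysctl -w net.ipv4.ip_local_port_range="16384 61000"                    # widen it for the running kernel (as root)
echo "net.ipv4.ip_local_port_range = 16384 61000" >> /etc/sysctl.conf   # persist the change across reboots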

On-prem cloud management like AWS

Java REST service framework - Dropwizard

Dropwizard - lots of boilerplate
node.js would be far simpler

Open Cloud - Real-time Application Server (Rhino)

Share Ubuntu drive on Mac

Caching in-process vs distributed

Node.js performance tips

Single Writer Design - Mechanical Sympathy

Mechanical Sympathy

There is also a really nice benefit when working on architectures such as x86/x64, which at the hardware level have a memory model in which load/store operations have preserved order: memory barriers are not required if you adhere strictly to the single writer principle. On x86/x64, "loads can be re-ordered with older stores" according to the memory model, so memory barriers are required when multiple threads mutate the same data across cores. The single writer principle avoids this issue because it never has to deal with writing the latest version of a data item that may have been written by another thread and is currently sitting in the store buffer of another core.

Intel 800Gbps interconnects

http://arstechnica.com/information-technology/2014/03/intels-800gbps-cables-headed-to-cloud-data-centers-and-supercomputers/

Mellanox OFED and Messaging Accelerator (VMA) RDMA/offloading

RDMA zero-copy

It seems the Sockets Direct Protocol (SDP) has been declared obsolete/deprecated by the OpenFabrics Alliance (OFA)

One alternative is rsockets, which only seems to be referenced in the IBM Java SDK (JSOR - Java Sockets over RDMA)

Other alternatives include VMA, iWARP, and RoCE, all supported in OFED

VMA White Paper
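
As a rough sketch, VMA works by preloading its library into an unmodified sockets application, offloading TCP/UDP onto the RDMA-capable NIC (the application name below is hypothetical; libvma ships with Mellanox OFED):

LD_PRELOAD=libvma.so ./my_udp_app    # run the app with its sockets accelerated by VMA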

Atlantic.net performance

uname -a
ubuntu01 3.2.0-23-generic #36-Ubuntu SMP x86_64

cat /proc/cpuinfo | grep model\ name | uniq
Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz

cat /proc/cpuinfo
cpu MHz : 1200.000
cache size : 15360 KB
cpu cores : 6

ping between 1 GbE ports ~0.267 ms
ping between 40 Gb IB ports ~0.250 ms

ibping between 40 Gb IB ports ~0.305 ms

iperf between 1 GbE ports
root@ubuntu01:~# iperf -c 209.208.8.163 -P4
------------------------------------------------------------
Client connecting to 209.208.8.163, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[  6] local 209.208.8.162 port 52414 connected with 209.208.8.163 port 5001
[  3] local 209.208.8.162 port 52412 connected with 209.208.8.163 port 5001
[  4] local 209.208.8.162 port 52413 connected with 209.208.8.163 port 5001
[  5] local 209.208.8.162 port 52415 connected with 209.208.8.163 port 5001
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec   377 MBytes   316 Mbits/sec
[…

InfiniBand configuration and testing

Add InfiniBand modules to /etc/modules

Either vi or echo/append them to /etc/modules
modprobe the modules to load them dynamically without a restart

### Set the IB modules you may wish to use (these are just some of the available modules, but should get you started):
IBMOD="ib_umad mlx4_core mlx4_ib mlx4_en ib_ipoib ib_ucm ib_uverbs ib_cm ib_sa ib_mad ib_core ib_addr"

Load the modules (now and during next boot).
for i in $IBMOD ; do echo $i >> /etc/modules; modprobe $i; done

Install opensm: sudo apt-get install opensm
Start opensm: service opensm start
Check it's running: ps aux | grep opensm
Determine the IB card model: lspci -d 15b3:
Query IB interface status: ibstat, ibstatus, ibhosts, ibswitches, iblinkinfo
Install other software: apt-get install ibverbs-utils libibcm1 librdmacm1 libaio1
Add/configure the interface: vi /etc/network/interfaces and add

auto ib0
iface ib0 inet static
        address
        netmask
        network
        broadcast
        mtu 65520
        pre-up echo co…
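
Once ib0 is up, a quick sanity check might look like this (the peer address is hypothetical):

ibv_devinfo                    # from ibverbs-utils: list HCA ports, firmware and link state
ping 192.168.10.2              # IPoIB reachability to the peer host's ib0 address
iperf -c 192.168.10.2 -P4      # rough IPoIB throughput, same tool as the 1 GbE test above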