
GrandeNet - The Internet on Steroids

GrandeNet can be used to optimize connectivity between computers distributed around the world by creating a mesh network which can be even faster than the Internet itself.
  • Last Update: 2019-01-08
  • Version: 002
  • Language: en

The Great Slowdown

As many countries and ISPs tend to optimize "internal" data access by applying tools to control or monitor users, overall network conditions deteriorate and performance can drop significantly. This situation leads to considerable cost increases for companies who want to provide applications to end users. Our research has shown that Internet speed - not only in China - varies more than expected on a global scale, and quite often the Internet does not use optimized routes, leading to temporarily or permanently slow access to some applications or for some groups of users. GrandeNet can improve connectivity to certain servers by up to 80% by using Re6st to maintain optimized and stable routes between all connected servers.

Can you Re6st?

Re6st is a multiprotocol random mesh generator that uses the Babel routing protocol to discover optimal routes between all points in the mesh. It supports IPv6 and IPv4, with RINA support coming soon. It is used commercially by VIFIB, our distributed cloud provider, to work around the current lack of reliability of Internet connectivity for distributed enterprise applications, caused by bugs in routers, packet inspection breaking the TCP protocol, government filters filtering too much, and so on. Without Re6st, it would have been impossible to deploy critical business applications used by large companies (Mitsubishi, SANEF, Aide et Action, etc.) on a decentralized cloud. It would also be impossible to manage the deployment of a distributed cloud in Brazil, China or Ivory Coast, where the Internet is even less reliable.
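
The intuition behind such a random mesh can be sketched in a few lines of Python. The toy example below is purely illustrative and uses networkx rather than Re6st or Babel (the node names and latencies are invented): every node opens tunnels to a few random peers, and a latency-weighted shortest-path computation then picks the best route through the overlay, possibly relaying traffic via intermediate nodes, which is roughly what Babel keeps doing on a live Re6st network.

# Illustrative sketch only: a toy latency-weighted mesh, not Re6st itself.
import random
import networkx as nx

random.seed(0)
nodes = ["Paris", "Beijing", "Tokyo", "Singapore", "Virginia", "Hongkong"]

G = nx.Graph()
for a in nodes:
    for b in random.sample([n for n in nodes if n != a], 3):
        # Each node opens tunnels to 3 random peers; the edge weight is a
        # made-up round-trip latency in milliseconds.
        G.add_edge(a, b, weight=random.randint(10, 300))

# A Babel-like routing layer keeps selecting the lowest-latency path
# through the overlay, possibly relaying via intermediate nodes.
path = nx.shortest_path(G, "Paris", "Beijing", weight="weight")
latency = nx.shortest_path_length(G, "Paris", "Beijing", weight="weight")
print "best overlay route:", " -> ".join(path), "(%s ms)" % latency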

IPython Notebook

IPython Notebook is a web-based interactive computational environment for creating executable notebooks with embedded Python code. It is widely used by researchers to produce and share their scientific work. We chose IPython Notebook for this article to provide a transparent walkthrough of what we are doing.

This article is fully reproducible by importing this notebook into your own IPython Notebook instance.

In order to execute this notebook we need some well-known Python libraries, specifically pandas, numpy, scipy and matplotlib. Below are the imports required to initialize the necessary libraries.

In [1]:
%pylab inline
import sys
from matplotlib import pyplot
import matplotlib.pyplot as plt
#from mpl_toolkits.basemap import Basemap
from IPython.display import display, clear_output
from IPython.core.display import HTML 

import pandas as pd
import numpy as np

from pandas import rolling_median

np.set_printoptions(threshold="nan")

pd.set_option('display.max_rows', 2000)

from pandas import Series, DataFrame, Panel
 
Populating the interactive namespace from numpy and matplotlib

Next, we define the core code, written as functions that perform the data collection and the calculation of the results for this article. If you are not interested in the code, you can skip directly to the next section of this article.

In [2]:
import urllib2

def load_json(url, average="internet_ipv4", packet_lost="internet_ipv4_packet_lost"):
  """ Download JSON and normalize the data by:
        - Replace failed by a number '0', so the column is a float and not string
        - Replace 'average' and packet_lost by another name 
            for help concat 2 numpy array w/o recreate it.
        - Remove latest empty line.
  """
  req = urllib2.Request(url)
  response = urllib2.urlopen(req)
  content = response.read()
  return '[%s]' % content\
    .replace('"failed"', "0")\
    .replace('"average"', '"%s"' % average)\
    .replace('"packet_lost"', '"%s"' % packet_lost)\
    .replace("\n", ",")[:-1]

def load_test(id, date_id):
  """ Load Test results from Distributed Monitoring Tool
      and transform into DataFrames """
    
  # Load JSON for ICMPv6
  ping6_as_jsonstring = load_json(
    log_dict[id]["grandenet_ipv6"] % date_id, 
    average="grandenet_ipv6",  packet_lost="grandenet_ipv6_packet_lost")

  # Load JSON for ICMPv4  
  ping_as_jsonstring = load_json(
    log_dict[id]["internet_ipv4"] % date_id, 
    average="internet_ipv4", packet_lost="internet_ipv4_packet_lost")

  return pd.read_json(ping6_as_jsonstring, convert_dates=["time"]), \
    pd.read_json(ping_as_jsonstring, convert_dates=["time"])

def get_computer_list(dframeA, dframeB ):
    """ Extract all computer names at the DataFrames """
    return list(set([ computer_name[0] for computer_name in dframeA[["computer_name"]].as_matrix()] + 
                  [ computer_name[0] for computer_name in dframeB[["computer_name"]].as_matrix()]))

def get_computer_destination_label(dframeA):
    """ Determinate the Label Name for the computer which are receiving the ping"""
    return getComputerLabel([computer_name[0] for computer_name in dframeA[["name_or_ip"]].as_matrix()][0])

def getComputerLabel(computer_name):
    """ Translate hostname, ip addresses into meaningfull names for better understanting"""
    return server_label.get(computer_name, computer_name)

# Main function used to plot the collected logs and compare the
# GrandeNet IPv6 and Internet IPv4 results.
def plot_ping_comparation(df_ping6, df_ping):
  """ Plot and compare the two DataFrames (ping6 over GrandeNet vs ping over
      the Internet), returning summary DataFrames for average response time
      and packet loss.
  """
  computer_list = get_computer_list(df_ping, df_ping6)

  computer_destination_label = get_computer_destination_label(df_ping6)

  measured_average = []
  packet_lost = []
    
  for computer_name in computer_list:
    
    if getComputerLabel(computer_name) == computer_destination_label:
      continue

    df6 = pd.DataFrame(df_ping6[df_ping6["computer_name"] == computer_name][df_ping6["grandenet_ipv6"] > 0][["time", "grandenet_ipv6"]])    
    df4 = pd.DataFrame(df_ping[df_ping["computer_name"] == computer_name][df_ping["internet_ipv4"] > 0][["time", "internet_ipv4"]])

    # Use a rolling median to eliminate noise spikes in the chart and in the measurement.
    df6['grandenet_ipv6'] = rolling_median(df6['grandenet_ipv6'], window=3, center=True)
    df4['internet_ipv4'] = rolling_median(df4['internet_ipv4'], window=3, center=True)
    
    label = "'%s' to '%s'" % (getComputerLabel(computer_name), computer_destination_label)

    if 0 in [len(df6), len(df4)]:
      print "Found one empty array for %s" % label
      continue
    
    df = pd.DataFrame(pd.concat([df6, df4]))

    if SHOW_ALL_CHARTS:
      df4.plot(x="time", title=label + " (lower is better)", 
               sort_columns=["time"], figsize=(20,6))
      df6.plot(x="time", title=label + " (lower is better)", 
               sort_columns=["time"], color='r', figsize=(20,6))
    
    df.plot(x="time", title=label + " (lower is better)",
            marker='o', color=["b", "r"], figsize=(20,6))
    
    # 0 entries were already filtered out above, as they represent a complete failure (no average available).
    ipv6_mean = df6["grandenet_ipv6"].mean()
    ipv4_mean = df4["internet_ipv4"].mean()

    grandenet_ipv6_packet_lost = df_ping6[df_ping6["computer_name"] == computer_name]["grandenet_ipv6_packet_lost"].mean()
    internet_ipv4_packet_lost = df_ping[df_ping["computer_name"] == computer_name]["internet_ipv4_packet_lost"].mean()
 
    if ipv6_mean < ipv4_mean:
      improvement_ratio = float(ipv4_mean - ipv6_mean)/ipv4_mean
      state = "OPTIMIZED in %sms (%.2f%%)" % ((ipv4_mean - ipv6_mean), improvement_ratio*100)
    elif ipv6_mean < (ipv4_mean + max(20, ipv4_mean*0.15)):
      state = "OK (in acceptable range %s < %s < %s)" % (ipv4_mean, ipv6_mean, (ipv4_mean + max(20, ipv4_mean*0.15)))
    else:
      state = "BAD (%sms slower)" % (ipv6_mean - ipv4_mean)

    measured_average.append({"name" : "'%s' to '%s'" % (getComputerLabel(computer_name), computer_destination_label),
                             "grandenet_ipv6": ipv6_mean,
                             "internet_ipv4": ipv4_mean,
                             "state": state})

    
    if grandenet_ipv6_packet_lost < internet_ipv4_packet_lost:
      loss_state = "OPTIMIZED (better packet loss rate)"
    elif grandenet_ipv6_packet_lost == internet_ipv4_packet_lost:
      loss_state = "OK (same packet loss rate)"
    elif (grandenet_ipv6_packet_lost - internet_ipv4_packet_lost) < 1:
      loss_state = "OK (less than 1% difference is considered the same)"
    else:
      loss_state = "BAD (worse packet loss rate)"

    packet_lost.append({"name" : "'%s' to '%s'" % (getComputerLabel(computer_name), computer_destination_label),
                             "grandenet_ipv6_packet_lost": grandenet_ipv6_packet_lost,
                             "internet_ipv4_packet_lost": internet_ipv4_packet_lost,
                             "state": loss_state})

  return pd.DataFrame(measured_average), pd.DataFrame(packet_lost)
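
To make the normalization performed by load_json more concrete, here is a small self-contained sketch. The two log lines and their values are invented for illustration only (the real logs are fetched over HTTPS by load_json itself), but they use the fields the rest of the code relies on: time, average, packet_lost, computer_name and name_or_ip.

# Illustrative only: what load_json's string normalization produces
# for two hypothetical log lines.
raw_log = (
  '{"time": "2016-01-28 00:00:00", "average": 45.1, "packet_lost": 0.0, '
  '"computer_name": "COMP-7", "name_or_ip": "frontend3.grandenet.cn"}\n'
  '{"time": "2016-01-28 00:10:00", "average": "failed", "packet_lost": 100.0, '
  '"computer_name": "COMP-7", "name_or_ip": "frontend3.grandenet.cn"}\n')

normalized = '[%s]' % raw_log\
  .replace('"failed"', "0")\
  .replace('"average"', '"internet_ipv4"')\
  .replace('"packet_lost"', '"internet_ipv4_packet_lost"')\
  .replace("\n", ",")[:-1]

# The result is a valid JSON array that pandas can read directly, with
# failed measurements turned into 0 and the columns already renamed.
print pd.read_json(normalized, convert_dates=["time"])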

Measuring Performance with SlapOS Distributed Monitoring

The core of the GrandeNet infrastructure is based on servers distributed across multiple cloud providers (Amazon, Qincloud, OVH, Rackspace, UCloud...) as well as standalone machines located in company offices and people's homes. Customers may add servers located on their premises, or even at their homes, to be used as their main production servers.

This hybrid and heterogeneous infrastructure of GrandeNet uses SlapOS to manage and monitor all distributed servers around the globe.

In this article we use a small set of 12 servers with public IPv4 addresses, all running SlapOS Distributed Monitoring. Each server tries to contact (using the ICMP protocol) each of the other servers over both IPv4 and IPv6. Tests are performed 10 times (10 pings) every 10 minutes, and we record the average response time and the packet loss for testing and comparison.

The image below illustrates the tests using just 3 servers:

GrandeNet Connectivity Example
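
The actual probing code is part of the SlapOS monitoring software and is not reproduced in this article, but conceptually each measurement boils down to something like the sketch below (a hypothetical helper written only for illustration, assuming a Linux ping/ping6 binary): send 10 ICMP echo requests to a peer and keep the average round-trip time and the packet loss percentage.

# Illustrative sketch of a single probe, not the actual SlapOS monitor code.
import re
import subprocess

def probe(host, count=10, use_ipv6=False):
    """ Ping a host 'count' times and return (average_ms, packet_loss_percent),
        or ("failed", 100.0) when no reply comes back. """
    command = ["ping6" if use_ipv6 else "ping", "-c", str(count), host]
    try:
        output = subprocess.check_output(command)
    except subprocess.CalledProcessError as error:
        # ping exits with a non-zero status when no reply is received.
        output = error.output
    lost = re.search(r"(\d+(?:\.\d+)?)% packet loss", output)
    rtt = re.search(r"= [\d.]+/([\d.]+)/", output)  # the min/avg/max/... summary line
    if rtt is None:
        return "failed", 100.0
    return float(rtt.group(1)), float(lost.group(1)) if lost else 0.0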

Below we initialize the location of each server's logs, along with labels to improve the readability of the charts and results.

In [3]:
server_label = {
  'i-j0dshts2': "Guanghouz - Qincloud",
  '10-13-16-6': "Guanghouz - UCloud",
  'i-wbs0d67i' : "Hongkong - Qincloud 1",
  'i-vutfghrs': "Hongkong - Qincloud 0",
  'i-hf0f7ocn': "Beijing - Qincloud",
  'vps212661.ovh.net': "Strasbourg - OVH",
  'ip-172-31-30-97': "Singapour - Amazon",
  'ip-172-31-6-206': "Tokyo - Amazon",
  'ip-172-31-8-66' : "Virginia - Amazon",
  'ip-172-31-7-155': "US West - Amazon",
  'cloud-server-grandenet' : "Hongkong - Rackspace",
    
  'COMP-9': 'US West - Amazon',
  'COMP-8': 'Singapour - Amazon',
  'COMP-7': 'Tokyo - Amazon',
  'COMP-6': 'Hongkong - Qincloud 1',
  'COMP-4': 'Hongkong - Qincloud 0',
  'COMP-2': 'Beijing - Qincloud',
  'COMP-3': 'Guanghouz - Qincloud',
  'COMP-10': 'Guanghouz - UCloud',
  'COMP-11': 'Strasbourg - OVH',
  'COMP-12': 'Virginia - Amazon',
  "COMP-13": 'Beauharnois - OVH',

  "frontend0.grandenet.cn": "Beijing - Qincloud",
  "2401:5180::1": "Beijing - Qincloud",
  "frontend1.grandenet.cn": "Guanghouz - Qincloud",
  "2401:5180:0:6::1": "Guanghouz - Qincloud",
  "2401:5180:0:9::1" : "Hongkong - Qincloud 0",
  "frontend3.grandenet.cn" : "Hongkong - Qincloud 0",
  "2401:5180:0:8::1" : "Hongkong - Rackspace",
  "frontend4.grandenet.cn" : "Hongkong - Rackspace",
  "2401:5180:0:7::1": "Hongkong - Qincloud 1",
  "frontend5.grandenet.cn": "Hongkong - Qincloud 1",
  "2401:5180:0:c::1": "Tokyo - Amazon", 
  "frontend7.grandenet.cn": "Tokyo - Amazon",
  "2401:5180:0:d::1": "Singapour - Amazon",
  "frontend6.grandenet.cn": "Singapour - Amazon",
  "2401:5180:0:10::1": "US West - Amazon",
  "frontend8.grandenet.cn": "US West - Amazon",
  "2401:5180:0:13::1": "Guanghouz - UCloud",
  "frontend9.grandenet.cn": "Guanghouz - UCloud",
  "2401:5180:0:16::1": "Strasbourg - OVH",
  "frontend10.grandenet.cn": "Strasbourg - OVH",
  "2401:5180:0:15::1": "Virginia - Amazon",
  "frontend11.grandenet.cn": "Virginia - Amazon",
  "2401:5180:0:17::1": "Beauharnois - OVH",
  "frontend12.grandenet.cn": "Beauharnois - OVH",

}


log_dict = {
  "Hongkong - Qincloud 0": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-314/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-314/ping/log.%s.log",
      },
  "Virginia - Amazon": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-322/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-322/ping/log.%s.log",
      },
  "Strasbourg - OVH": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-321/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-321/ping/log.%s.log",
      },
  "Guanghouz - UCloud": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-320/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-320/ping/log.%s.log",
      },
  "Tokyo - Amazon": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-317/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-317/ping/log.%s.log",
      },
  "US West - Amazon": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-319/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-319/ping/log.%s.log",
      },
  "Singapour - Amazon": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-318/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-318/ping/log.%s.log",
      },
  "Hongkong - Qincloud 1": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-316/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-316/ping/log.%s.log",
      },
  "Guanghouz - Qincloud": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-313/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-313/ping/log.%s.log",
      },
  "Beijing - Qincloud": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-312/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-312/ping/log.%s.log",
      },
  "Hongkong - Rackspace": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-315/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-315/ping/log.%s.log",
      },
  "Beauharnois - OVH": {
      "grandenet_ipv6": "https://softinst303.node.grandenet.cn/SOFTINST-625/ping6/log.%s.log",
      "internet_ipv4": "https://softinst303.node.grandenet.cn/SOFTINST-625/ping/log.%s.log",
      }
}

We also limit the scope of this article to the tests performed on a single day, defined below by the variable "DAY".

In [4]:
# Define here whether you want more or less chart verbosity. Showing all
# charts can make this report quite big.
SHOW_ALL_CHARTS = False

# Generate the report for Jan 28, 2016
DAY = "20160128"

Collecting Data from Distributed SlapOS Monitoring

In order to produce the results for this article, we use the functions defined above to crawl the logs and turn them into DataFrames. These DataFrames contain the test results for the period indicated by DAY above.
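
The twelve cells below fetch the logs server by server, so that each download can be re-run individually. The same collection could also be written as a single loop over log_dict; the following sketch is only an alternative form and is not executed in this notebook.

# Alternative (not executed here): collect every server in a single pass.
all_results = {}
for server_id in log_dict:
    all_results[server_id] = load_test(id=server_id, date_id=DAY)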

In [5]:
hq0_df_ping6, hq0_df_ping = load_test(id = "Hongkong - Qincloud 0", date_id=DAY)
In [6]:
va_df_ping6, va_df_ping = load_test(id = "Virginia - Amazon", date_id=DAY)
In [7]:
gu_df_ping6, gu_df_ping = load_test(id =  "Guanghouz - UCloud", date_id=DAY)
In [8]:
sa_df_ping6, sa_df_ping = load_test(id = "Singapour - Amazon", date_id=DAY)
In [9]:
hq1_df_ping6, hq1_df_ping = load_test(id = "Hongkong - Qincloud 1", date_id=DAY)
In [10]:
hr_df_ping6, hr_df_ping = load_test(id = "Hongkong - Rackspace", date_id=DAY)
In [11]:
wa_df_ping6, wa_df_ping = load_test(id = "US West - Amazon", date_id=DAY)
In [12]:
go_df_ping6, go_df_ping = load_test(id = "Strasbourg - OVH", date_id=DAY)
In [13]:
ta_df_ping6, ta_df_ping = load_test(id = "Tokyo - Amazon", date_id=DAY)
In [14]:
gq_df_ping6, gq_df_ping = load_test(id = "Guanghouz - Qincloud", date_id=DAY)
In [15]:
bq_df_ping6, bq_df_ping = load_test(id = "Beijing - Qincloud", date_id=DAY)
In [16]:
bho_df_ping6, bho_df_ping = load_test(id = "Beauharnois - OVH", date_id=DAY)

Internet IPv4 vs GrandeNet IPv6

Using the DataFrames we can visualize a comparison of the response times (in milliseconds) over the Internet with IPv4 (red) and over GrandeNet with IPv6 (blue). As we use the ICMP protocol to measure the response time, the charts below use the name "ping" for the IPv4 test and "ping6" for the IPv6 test, and highlight the differences between the Internet IPv4 and the GrandeNet IPv6 routes. The smaller the response time, the lower the plotted line, the better.

In [17]:
hq0_average_dataframe, hq0_packetloss_dataframe = plot_ping_comparation(hq0_df_ping6, hq0_df_ping)
 
/srv/slapgrid/slappart8/srv/runner/software/9a8d67b31671ba36ac107c65a141c073/develop-eggs/pandas-0.16.2-py2.7-linux-x86_64.egg/pandas/core/frame.py:1825: UserWarning: Boolean Series key will be reindexed to match DataFrame index.
  "DataFrame index.", UserWarning)

[Charts: response time from each source server to 'Hongkong - Qincloud 0', GrandeNet IPv6 vs Internet IPv4 (lower is better)]

In [18]:
va_average_dataframe, va_packetloss_dataframe = plot_ping_comparation(va_df_ping6, va_df_ping)

[Charts: response time from each source server to 'Virginia - Amazon', GrandeNet IPv6 vs Internet IPv4 (lower is better)]

In [19]:
go_average_dataframe, go_packetloss_dataframe = plot_ping_comparation(go_df_ping6, go_df_ping)