@borderlessjs/borderless


Borderless

Framework for transparently running code anywhere across the system architecture.

Project Goal

The goal of this framework is to make architectural / topology decisions about where code runs at compile time (based on configuration) and at run time (based on client capabilities or operational needs), instead of hardcoding those decisions during development.

We hope this will make systems less rigid after the code is written and simplify re-architecture and optimization.

Environments and their properties

Systems built with the Borderless framework could support a wide variety of environments. Our primary focus is on web applications, but mobile application development (native and hybrid), IoT devices, and other distributed systems may be included as well.

As an extreme, mostly illustrative goal, we'd also like to explore Interplanetary Computation, where latencies and resource constraints are severe and make the need for this approach easier to see.

| Environment | Location | Payload Supported | Flavor/Constraints | APIs Supported | Communication Protocols | Latency to User (ms, estimated) | CPU Availability | Power Availability | Storage Availability |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Rendering Pipeline | Browser | HTML + CSS | | | | 16 - 300 | Low, Medium, High | Low, Medium, High | Low |
| Main Thread | Browser | JavaScript, WebAssembly | Bundled, JS Modules | Web APIs | HTTP, postMessage | 16 - 1000 | Low, Medium, High | Low, Medium, High | Low, Medium |
| Web Workers | Browser | JavaScript, WebAssembly | Bundled | Subset of Web APIs (notably no DOM access) | HTTP, postMessage | 50 - 500 | Low, Medium, High | Low, Medium, High | Low, Medium |
| Service Workers | Browser | JavaScript, WebAssembly | Bundled | | HTTP (fetch), postMessage | 50 - 350 | Low, Medium, High | Low, Medium, High | Low, Medium |
| Edge Workers | CDN (AWS Lambda@Edge, CloudFlare, Akamai, Fastly) | JavaScript (Fastly also supports WebAssembly) | Bundled, JS Modules | | HTTP | 100 - 500 | Low, Medium, High | Low, Medium, High | High |
| Serverless | Cloud | JavaScript (and many others) | Bundled, JS Modules | Subset of Node APIs, notably no file access | HTTP, some networking | 200 - 1500 | Low, Medium | High | High |
| NodeJS HTTP Server | Server (on request) | JavaScript, WebAssembly | Bundled, JS Modules | Node APIs, notably file access | HTTP, any networking | 300 - 1500 | Medium, High | High | High |
| Build Pipelines | Server (on code / data change) | JavaScript (and many others) | Bundled, JS Modules | Node APIs, notably file access | HTTP, any networking | n/a | Medium, High | High | High |

Interplanetary Computation

There are already massive Interplanetary Internet initiatives underway as space exploration missions start to unify communication between various elements of their space hardware.

We want to expand it from communication and storage to computation as well, primarily because the analogy with current web application topologies is clear, and exaggerated constraints make it easier to imagine the need for Borderless computing.

Imagine that a Mars rover needs to execute some code; we have a few options:

  • Execute it on the rover's computer itself, where CPU, energy, storage capacity, and the temperatures of the environment are extremely constrained
  • Communicate with a local, stationary on-Mars computer (just imagine that we already have them), where energy needs are probably solved a bit better, the environment is better controlled, and, with no need to move, larger equipment with more storage and CPU can be deployed
  • Alternatively, it can communicate (as it currently does) with orbiting stations that have relatively good bandwidth, as the Martian atmosphere is thinner than Earth's and communications are easier to intercept. Those stations also have larger solar panels and communication dishes because there is less debris and gravity, making them easier to deploy
  • Those ground or orbital stations can also communicate with Earth's orbital or ground stations and request computation to be done further upstream
  • With lunar missions, we can imagine more intermediate hops becoming available as well

Any of these options, or a combination of them, can be chosen depending on conditions.

Some of this is already in full operation as space agencies of various countries explore space, and some of the solutions are already available; according to Vint Cerf's presentation at Chrome University in the fall of 2020, internet protocols have already had to be updated to support extreme latencies.

Extreme latencies require us to have very realistic and well-defined data lifecycle / freshness policies and to build our software and infrastructure around those needs.

Space missions in general require very sophisticated telemetry instrumentation because maintenance of the systems is extremely expensive and risk reduction is a very good cost reduction strategy.

All this helps us macro-model the needs we have in our earthly, micro-scale world, where latencies similarly vary more than the average software engineer can easily envision, data lifecycle / freshness policies can't just be shoved into an "our data is always dynamic" bucket, and we need to build a better developer experience to solve these issues.

So we encourage you to keep the interplanetary use-case in mind when you discuss, architect, and build the Borderless framework and, ultimately, the applications it will power.

How Will It Work?

Developers use a language that can be executed, transpiled, or compiled into a target that runs on all supported platforms. Initially we will use JavaScript, but WebAssembly is another option for the web, and other languages compiled into target executables could potentially work for non-web applications.

All the functions that can be called are annotated or "registered", providing the metadata needed to decide which environments can run them and which have to delegate to an upstream execution environment (or potentially a mesh in the future).

Initially, registration will have to happen in the code, but we can imagine a compiler that could read annotations and even do code analysis to deduce some of the properties of the code.

When code is deployed, each project would include, in addition to the source code, a configuration file (topology.js) that defines the execution environments available to run it, and the build infrastructure will be used to compile deployable packages for each of the target environments.
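
As a rough illustration, topology.js could look something like the sketch below. The file name comes from the description above, but the structure, environment names, and field names are assumptions made for the example, not a defined format.

```js
// topology.js - a hypothetical sketch of a project-level topology configuration;
// the environment names, fields, and upstream routing shown here are
// illustrative assumptions, not a defined format.
module.exports = {
  // execution environments available to this project
  environments: {
    "main-thread":    { location: "browser", payloads: ["js"] },
    "web-worker":     { location: "browser", payloads: ["js"] },
    "service-worker": { location: "browser", payloads: ["js"] },
    "edge":           { location: "cdn",     payloads: ["js"] },
    "node-server":    { location: "server",  payloads: ["js", "wasm"] },
  },

  // which environments each environment may delegate to when it cannot
  // fulfill a call locally
  upstream: {
    "main-thread":    ["web-worker", "service-worker", "edge"],
    "service-worker": ["edge", "node-server"],
    "edge":           ["node-server"],
  },
};
```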

Each package will include the code that can run in that environment, transpiled to match the language properties of that environment (e.g. target format, ES modules support vs. bundled, polyfills for APIs that can be polyfilled, etc.).

It will also include an environment configuration file, environment.js, that dictates which function calls have to be fulfilled locally and which have to be routed through a communication protocol to a set of upstream locations (to start, we'll support HTTP for network communication and postMessage for worker communication in the browser).
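
A per-environment environment.js could then look something like this sketch for the browser main thread. The function names and field names are hypothetical and only illustrate the local-vs-routed split described above.

```js
// environment.js - a hypothetical sketch generated by the build for the
// browser main thread; function names and fields are illustrative assumptions.
module.exports = {
  name: "main-thread",

  // calls that must be fulfilled locally in this environment
  local: ["renderProductCard", "formatPrice"],

  // calls routed upstream over a supported communication protocol
  remote: {
    searchProducts: { via: "http",        upstream: ["edge", "node-server"] },
    resizeImage:    { via: "postMessage", upstream: ["web-worker"] },
  },
};
```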

As code gets wrapped in topology decision logic, it can also include speed instrumentation that reports telemetry data to the operations data center, helps operators visually understand code execution, and allows them to modify and deploy topology changes as needed.

The same telemetry data could be used to dynamically change the topology.
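
A minimal sketch of what such instrumentation could look like follows; instrument() and reportTelemetry() are hypothetical names, and the real framework wrapper would also carry the routing logic.

```js
// Hypothetical sketch: wrap a registered function so every call is timed and
// a telemetry sample is reported to the operations side.
function instrument(name, environment, fn) {
  return async (...args) => {
    const start = Date.now();
    try {
      return await fn(...args);
    } finally {
      reportTelemetry({ name, environment, durationMs: Date.now() - start });
    }
  };
}

function reportTelemetry(sample) {
  // stub: in a browser this could be navigator.sendBeacon(), on a server any
  // metrics pipeline; logging keeps the sketch self-contained
  console.log("telemetry", sample);
}

// usage: const search = instrument("searchProducts", "main-thread", searchProducts);
```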

Code registration requirements

  • Required APIs (e.g. Web APIs, databases, file/storage access)
  • Required latency range
  • Data lifecycle / freshness policies
  • Async execution, as any part of the code might need to wait
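
To make the list above concrete, here is a sketch of what in-code registration carrying this metadata could look like. registerFunction() and all field names are hypothetical and are not the package's actual API; they simply mirror the requirements listed above.

```js
// Hypothetical registration stub, for illustration only
const registry = new Map();
function registerFunction(fn, metadata) {
  registry.set(fn.name, { fn, metadata });
}

// Assumed async, since any part of the code might need to wait on an
// upstream environment
async function getRecommendations(userId) {
  return []; // placeholder application logic
}

registerFunction(getRecommendations, {
  requiredAPIs: ["database"],          // e.g. Web APIs, databases, file/storage access
  latency: { maxMs: 500 },             // required latency range
  freshness: { maxAgeSeconds: 3600 },  // data lifecycle / freshness policy
});
```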

Misc Notes

Below are some notes that may or may not need to be taken into account when designing a Borderless system.

Notes on Formats

Different parts of the topology can operate and produce different formats.

It is unclear how this plays into the overall framework and if it should abstract it away or concentrate on specific flows and transformations.

Here are a few use-cases that are currently pretty clear and widely used in the industry:

  • "Static-generation" workflow when Build pipeline generates HTML that is deployed to other environments and ultimately to rendering pipeline in the browser before user makes a request reducing latency to possible minimum. This use-case is popularized by JAMstack and got birth to services like Netlify that specializes in that
  • All environments can get code as input and produce HTML at runtime for browser rendering pipeline to consume - this is the classic web development
  • All environments can produce data (serialized in JSON, for example) which downstream environments can render (this includes non-HTML rendering environments like mobile apps or IoT devices and etc)
  • Build pipelines convert code in one language into destination languages and packages (traditional CI/CD) to be executed in other environments
  • Some environments can execute code in various languages
  • Some environments can execute WebAssembly which in turn can be a compilation target for some languages

Notes on moment of execution

This framework should unify various execution patterns in order to be able to convert code from one mode of operation into another based on performance requirements and data freshness requirements.

  • Some operations can execute code upon request so users get the latest and greatest data (traditional, 3-tier web development)
  • Some operations require real-time visualization and very low latency (e.g. gameplay)
  • Some operations can be performed when data changes, with results that can be less than perfectly fresh (event-based build pipelines)
  • Some environments can have intermittent connectivity (e.g. progressive web apps, mobile apps) and should have flexible data policies and fallbacks for all, some, or no data (based on business functionality), while still producing useful feedback for the user in as many cases as possible
  • Some operations can be performed in a batch manner because the data freshness policy accepts large latency, but data volumes, CPU, and power consumption are large and require cost optimization (machine learning applications, vendor data sync, etc.)

Notes on mobile apps

Mobile applications and web applications share the majority of their logic and data requirements, but have significantly different rendering technologies and release cycles.

Notes on IoT devices

IoT devices usually have low rendering requirements and some data consumption requirements, but often concentrate on producing data and sending it back to central storage.

This "data source" behavior can also be included here because this potentially applies to other applications like telemetry or business analytics flows.

Notes on topology

We envision multiple types of topology decisions:

  • Configuration - similar to the traditional operations / DevOps workflow that defines the systems code is deployed to
  • Run-time scalability adjustments - similar to modern DevOps workflows that scale some types of environments up/down to support consumption needs, or selectively shut down parts of the system in case of an outage
  • Run-time decisions based on environment capabilities:
    • Progressively Enhanced Single-page Applications that use so-called Server-Side Rendering (SSR) can use one topology to produce HTML for the initial view, but another topology for subsequent views that use front-end routing
    • Progressive Web Applications (PWAs) use one topology for the first request (when the user has never been to the site), but another for subsequent requests, when the Service Worker (installed after the first request) can take over some operations
  • Run-time decisions based on users' device capabilities, e.g. network speed, CPU power, battery level, etc.: off-loading large computations to the server for low-powered devices, or running them with much lower latency in web workers (there are multiple use-cases in data analysis and visualization, machine learning, media format processing, etc.) - see the sketch after this list
  • Run-time decisions based on users' location or content preferences, e.g. geo-fencing or language
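
As a rough sketch of the device-capability case, a browser-side decision could look like this; the thresholds, environment names, and the chooseEnvironment() helper are all illustrative assumptions.

```js
// Hypothetical run-time decision in the browser: pick an execution environment
// for a heavy computation based on device and network capabilities.
function chooseEnvironment() {
  const cores = navigator.hardwareConcurrency || 1;
  const connection = navigator.connection; // Network Information API, not in every browser
  const slowNetwork =
    !!connection && ["slow-2g", "2g"].includes(connection.effectiveType);

  // capable device: run the computation locally, off the main thread
  if (cores >= 4) return "web-worker";
  // low-powered device with a usable network: off-load to the server
  if (!slowNetwork) return "node-server";
  // low-powered device on a slow network: fall back to cached results
  return "service-worker";
}
```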

Ultimately, we'd want to be able to reproduce most of the existing topologies that are well established in the industry. Here are several diagrams that illustrate existing topologies.

Machine Learning

Topology can also be dynamically optimized based on telemetry that comes from all the environments about execution speeds, failure rates and business outcomes.

Machine learning can be used here: in analogy with how optimizing JIT compilers optimize code in browsers, a topology optimizer can adjust the topology to minimize latency and/or maximize business KPIs.

Participants

I say "we" when referring to the team that would work on this project, but so far this was only born in my brain.

I've talked about it with a few people and hope to attract more eyes and brains to this project, which will hopefully lead to its progress and success.

Feel free to reach out to me in the issue tracker if you have questions, comments, or suggestions.

When more people start contributing to this project, I'll update this section and include you all below:
