
Announcing Docker 17.06 Community Edition (CE)


Today we released Docker CE 17.06 with new features, improvements, and bug fixes. Docker CE 17.06 is the first Docker version built entirely on the Moby Project, which we announced in April at DockerCon. You can see the complete list of changes in the changelog, but let’s take a look at some of the new features.


Multi-stage builds

The biggest feature in 17.06 CE is that multi-stage builds, announced in April at DockerCon, have come to the stable release. Multi-stage builds allow you to build cleaner, smaller Docker images using a single Dockerfile.

Multi-stage builds work by building intermediate images that produce an output. That way you can compile code in an intermediate image and use only the output in the final image. For instance, Java developers commonly use Apache Maven to compile their apps, but Maven isn’t required to run them. Multi-stage builds can yield substantial image-size savings:

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
maven               latest              66091267e43d        2 weeks ago         620MB
java                8-jdk-alpine        3fd9dd82815c        3 months ago        145MB

Let’s take a look at our AtSea sample app which creates a sample storefront application.

AtSea uses a multi-stage build with two intermediate stages: a Node.js base image that builds a ReactJS app, and a Maven base image that compiles a Spring Boot app. The outputs are combined into a single final image.

The final image is only 209MB, and doesn’t have Maven or Node.js.

There are other builder improvements as well, including allowing use of build time arguments in the FROM instruction.

Logs and Metrics

 

Metrics

We currently support metrics through an API endpoint in the daemon. You can now expose Docker’s /metrics endpoint to plugins.

$ docker plugin install --grant-all-permissions cpuguy83/docker-metrics-plugin-test:latest

$ curl http://127.0.0.1:19393/metrics

This plugin is for example purposes only. It runs a reverse proxy on the host’s network that forwards requests to the local metrics socket in the plugin. In real scenarios you would likely either push the collected metrics to an external service or make the metrics available for collection by a service such as Prometheus.

Note that while metrics plugins are available on non-experimental daemons, the metric labels are still considered experimental and may change in future versions of Docker.

 

Log Driver Plugins

We have added support for log driver plugins.

 

Service logs

The docker service logs command has moved out of the Edge release and into Stable, so you can easily get consolidated logs for an entire service running on a Swarm. We’ve also added an endpoint for logs from individual tasks within a service.

Networking

 

Node-local network support for Services

Docker supports a variety of networking options. With Docker 17.06 CE, you can now attach services to node-local networks. This includes networks like Host, Macvlan, IPVlan, Bridge, and local-scope plugins. For instance, for a Macvlan network you can create node-specific network configurations on the worker nodes and then create a network on a manager node that brings in those configurations:

[Wrk-node1]$ docker network create --config-only --subnet=10.1.0.0/16 local-config

[Wrk-node2]$ docker network create --config-only --subnet=10.2.0.0/16 local-config

[Mgr-node2]$ docker network create --scope=swarm --config-from=local-config -d macvlan mynet

[Mgr-node2]$ docker service create --network=mynet my_new_service

Swarm mode

We have a number of new features in swarm mode. Here are just a few of them:

Configuration Objects

We’ve created a new configuration object for swarm mode that allows you to securely pass along configuration information in the same way you pass along secrets.

$ echo "This is a config" | docker config create test_config -

$ docker service create --name=my-srv --config=test_config …

$ docker exec -it 37d7cfdff6d5 cat test_config

This is a config

Certificate Rotation Improvements

The swarm mode public key infrastructure (PKI) system built into Docker makes it simple to securely deploy a container orchestration system. The nodes in a swarm use mutual Transport Layer Security (TLS) to authenticate, authorize, and encrypt the communications between themselves and other nodes in the swarm. Since this relies on certificates, it’s important to rotate those frequently. Since swarm mode launched with Docker 1.12, you’ve been able to schedule certificate rotation as frequently as every hour. With Docker CE 17.06 we’ve added the ability to immediately force certificate rotation on a one-time basis.

docker swarm ca --rotate

Swarm Mode Events

You can use docker events to get real-time event information from Docker. This is really useful when writing automation and monitoring applications that work with Docker. But until Docker CE 17.06 there was no support for events in swarm mode. Now docker events will return information on services, nodes, networks, and secrets.

Dedicated Datapath

The new --datapath-addr flag on docker swarm init allows you to isolate swarm mode management traffic from the data passed around by the application. That helps protect the cluster from IO-greedy applications. For instance, if you initiate your cluster with:

docker swarm init --advertise-addr=eth0 --datapath-addr=eth1

cluster management traffic (Raft, gRPC, and gossip) will travel over eth0, and services will communicate with each other over eth1.

Desktop Editions

We’ve got three new features in Docker for Mac and Windows.

GUI option to reset docker data without losing all settings

Now you can reset your data without resetting your settings.


Add an experimental DNS name for the host

If you’re running containers on Docker for Mac or Docker for Windows and want to access ports open on the host from inside a container, you can use the new experimental DNS names docker.for.mac.localhost and docker.for.win.localhost.

Login certificates for authenticating registry access

You can now add certificates to Docker for Mac and Docker for Windows that let you authenticate to registries with more than just a username and password. This makes accessing Docker Trusted Registry, the open source Registry, and any other registry application fast and easy.

Cloud Editions

 

Our Cloudstor volume plugin is available on both Docker for AWS and Docker for Azure. In Docker for AWS, support for persistent volumes (both global EFS-based and attachable EBS-based) is now available in Stable, and we support EBS volumes across Availability Zones.

For Docker for Azure, we now support deploying to Azure Gov. Support for persistent volumes through Cloudstor backed by Azure File Storage is now available in Stable for both Azure Public and Azure Gov.

 

Deprecated

 

In the dockerd command line, we long ago deprecated the --api-enable-cors flag in favor of --api-cors-header. We’re now removing --api-enable-cors entirely.

Ubuntu 12.04 “precise pangolin” has been end-of-lifed, so it is now no longer a supported OS for Docker. Later versions of Ubuntu are still supported.

 

What’s next

 

To find out more about these features, see the complete changelog mentioned above.

The post Announcing Docker 17.06 Community Edition (CE) appeared first on Docker Blog.

seriousben, 50 days ago: Release of docker containing my first contribution :)

An easter egg for one user: Luke Skywalker


Article URL: http://einaregilsson.com/an-easter-egg-for-one-user-luke-skywalker/

Comments URL: https://news.ycombinator.com/item?id=14653017

Points: 242

# Comments: 24

seriousben, 51 days ago: Wow, best Easter egg ever targeted to a single user. :)

Why getting traction is so hard these days


The end of the cycle
One of the best essays written last year was Elad Gil’s End of Cycle?, referencing our most recent 2007-2017 run on mobile and web software, and the implications for investing, startups, and entrepreneurs. Although he doesn’t directly talk about it, the end of a tech cycle has major implications for launching new products and growing existing product categories, because of a simple thing:

It gets much, much harder to grow new products or pivot existing ones into new markets

The reason for the above is that there are multiple trends – happening right now – that impede growth for new products. These trends are being driven by the biggest players – Google/Facebook, et al – but also by the significant leveling up of practitioners in design/PM/data/growth.

We’ll look at several trends in this essay, including the following:

  1. Mobile platform consolidation
  2. Competition on paid channels
  3. Banner blindness = shitty clickthroughs
  4. Superior tooling
  5. Smarter, faster competitors
  6. Competing with boredom is easier than competing with Google/Facebook

These trends are powerful and critical to understanding why all of a sudden, entrepreneurs/investors are starting to get into many new fields (genomics, VTOL cars, cryptocurrency, autonomy, IoT, etc) in order to find new opportunities. After all, if you can’t grow in the existing markets, you very quickly need to get into new ones, as Elad describes:

One sign that technology markets often exhibit at the tail end of a cycle is a fast diversification of the types of startups getting funded. For example, following the core internet boom of the late 90s (Google, Yahoo!, eBay, PayPal), in early 2000 and 2001 there was a sudden diversification and investment into P2P and mobile (before mobile was ready) and then in 2002-2003 people started looking at CleanTech, Nanotech etc – industries that obviously all eventually failed from an entrepreneurial and investment return perspective.

Nanotech, cleantech, etc was the last cycle, and now we’re talking about the next one.

#1 Mobile platform consolidation
The new Google/Apple app duopoly is more concentrated, more closed, and far less rich (from a growth standpoint) than the web – which means that mobile is far more stagnant and harder to break into. App Store features like top ranking charts, “Essential” bundles of apps, and editorialized “Featured App” sections all help drive a winner-takes-all mobile ecosystem.

No wonder app store rankings have ossified over the years. Facebook and Google now control most of the Top 10 apps in the mobile ecosystem:

Source: Nielsen, Dec 2016

If you’re introducing a new app – whether unbundling a more complex app or launching a new startup – how do you break into this? There’s not a ton of organic opportunities. And the paid acquisition channels are getting saturated too.

#2 Competition on paid channels
Paying for acquisition is one of the key channels still available, if you can find the right untapped audience segments with high ROIs. This only works when prices aren’t bid up and you don’t face too much competition for the same ad inventory. Unfortunately, that’s not what’s happening.

For example, let’s look at some of the dynamics of Facebook increasing their revenue per DAU over the last few years:

This is driven by a number of factors, of course – relevance, targeting, ad unit engagement, etc. – but it’s also because competition is getting fiercer on Facebook ads, not less, as evidenced by the rapid increase in the advertiser count as well as the increase in revenue per user. In 2017, Facebook counts over 5 million advertisers on its platform, up from 4 million in Q3 of last year and 2 million in 2015. During its Q1 2017 earnings call, Facebook told investors that it expected ad revenue to approach a saturation point, despite major growth in Q1 2017 earnings as compared to 2016. It’s currently at 2 billion users, with 17% YoY user growth, and its ability to add more inventory depends on increasing its user base or increasing users’ time spent on Facebook.

#3 Banner blindness = shitty clickthroughs
Additionally, everyone’s getting smarter about growth, including consumers. Today, most invite systems no longer have the novelty value or efficacy they did 10 years ago (Dropbox’s give/get was novel when it launched), and consumers’ “banner blindness” extends far beyond actual display advertising to encompass referral systems and virality programs.

In Mary Meeker’s latest internet trends report, she reports that up to 1/3 of users in some countries use ad blocking, and we’re quickly on our way to 600M internet MAU who can’t be reached by ads:

This is just the 2017 version of The Law of Shitty Clickthroughs, which I wrote about a few years ago, where I showed some stats indicating that email marketing open rates are on the decline:

… and that traditional banner CTRs seem to be asymptotically approaching zero:

These trends are troubling, and mean that these channels are getting less engagement per user, and we haven’t found amazing new channels to replace them.

#4 Superior tooling – which levels the playing field
At the same time as advertising is getting more crowded, there’s also increasingly widespread availability and adoption of tools like Mixpanel, Leanplum, Optimizely and others that close the gap on being data-driven at companies.

Ten years ago, we used to look at total registered users. Cohort analysis was a sophisticated approach, and we also didn’t have a sense for MAU, DAU, or other more granular metrics. One of the killer features of Mixpanel is that it made understanding cohort-based retention turnkey. It used to take a real investment of engineers, data scientists, and know-how to be able to create simple graphs like this:

Now, it’s pretty much turnkey. You can get this chart from Mixpanel (and many others!) practically for free, as soon as you implement your analytics tracking.

In B2B, we’re seeing the same phenomenon. Outbound used to be painstaking and manual. Today, there are many sales tools that make outbound more accessible (Mixmax, Outreach, insidesales.com, etc.); they automate part of the process but also generate more noise and competition. Tasks that used to be manual and high friction are now automated and easy, which leads to more people jumping in.

The result is that it makes everyone better. You and all your competitors understand your/their acquisition and retention bottlenecks. Everyone has an equal, data-driven shot at improving LTV, and as a corollary can spend more on ads.

#5 Smarter and faster competitors
It used to be that startups could count on their competitors to be big, dumb, and slow. Not anymore. We’ve all gotten smarter and faster, and that includes your competitors. It used to be that you could wait a few years before competitors would respond. Now the Facebooks, Hubspots and Salesforces of the world can and will copy you right away.

Most famously, we’ve seen Facebook fast-follow Snap within Messenger, Instagram, WhatsApp, and its core product:

But it’s not just consumer where this is happening:

  • Dropbox <> Google Drive
  • Slack <> Microsoft Teams
  • YesWare <> Hubspot Sales

… and many more examples too.

#6 Competing with boredom is easier than competing with Facebook + Google
When the App Store first launched, competition was easy: Boredom. Mobile app developers were taking time away from easy, ‘idle’ activities like waiting in line, commuting etc. But today, acquiring a new app user means stealing a user’s time from their favorite existing app.

As we near the end of the cycle, companies have moved from non-zero-sum to zero-sum competition.

Instead of competing with boredom, we’re now competing with Silicon Valley’s top tech companies, who already have all your users (back to number 2 above). This also applies to the consumerized workplace, where new entrants will be competing to steal users’ time from Slack, Dropbox, and other favorite apps. This is much, much harder because the incumbents have pretty great products, and proven distribution models to respond with if needed.

How the industry is evolving, in response
The above trends are troubling for new products, and especially for startups. All 6 of these trends are scary, and they’ve emerged because we’re at the end of a cycle. There are a variety of natural monopolistic trends (like app stores, ad platforms, etc.), and everything related to growth and traction is getting harder.

If companies want to stay in the mobile/software product categories, they need to evolve their strategies. I’ll save a deeper discussion for a future essay, but here are some observations on what’s happening:

  1. More money diverted to paid acquisition
  2. Deeper monetization to open up channels – especially paid
  3. Creation of paid referral programs to complement ad buying
  4. Personalization features that rely on lots of data to amp up targeting
  5. Products trying to deepen differentiation by solving hard(er) problems/tech

There seems to be a deepening of monetization, differentiation, and personalization to help open up growth. This happens by solving more fundamental customer problems – especially those that help generate real $ value for people – which also helps open up paid channels, whether that’s advertising, referrals, or promos.

More discussion on this in a future writeup!

The post Why getting traction is so hard these days appeared first on andrewchen.

seriousben, 53 days ago: Well-researched trends explaining why traction is hard to get these days.

What Kind of Dog Is It – Using TensorFlow on a Mobile Device


Article URL: https://jeffxtang.github.io/deep/learning,/tensorflow,/mobile,/ai/2016/09/23/mobile-tensorflow.html

Comments URL: https://news.ycombinator.com/item?id=14609472

Points: 13

# Comments: 3

seriousben, 58 days ago: Interesting guide. Opens the door to lots of projects.

Rust as a gateway drug to Haskell


Article URL: http://xion.io/post/programming/rust-into-haskell.html

Comments URL: https://news.ycombinator.com/item?id=14550606

Points: 242

# Comments: 92

seriousben, 62 days ago: Interesting comparison between Rust and Haskell.

Go, without package scoped variables


This is a thought experiment: what would Go look like if we could no longer declare variables at the package level? What would be the impact of removing package scoped variable declarations, and what could we learn about the design of Go programs?

I’m only talking about expunging var, the other five top level declarations would still be permitted as they are effectively constant at compile time. You can, of course, continue to declare variables at the function or block scope.

Why are package scoped variables bad?

But first, why are package scoped variables bad? Putting aside the problem of globally visible mutable state in a heavily concurrent language, package scoped variables are fundamentally singletons, used to smuggle state between unrelated concerns; they encourage tight coupling and make the code that relies on them hard to test.

As Peter Bourgon wrote recently:

tl;dr: magic is bad; global state is magic → [therefore, you want] no package level vars; no func init.

Removing package scoped variables, in practice

To put this idea to the test I surveyed the most popular Go code base in existence; the standard library, to see how package scoped variables were used, and assessed the effect applying this experiment would have.

Errors

One of the most frequent uses of public package level var declarations is errors: io.EOF, sql.ErrNoRows, crypto/x509.ErrUnsupportedAlgorithm, and so on. Removing the use of package scoped variables would remove the ability to use public variables for sentinel error values. But what could be used to replace them?

I’ve written previously that you should prefer behaviour over type or identity when inspecting errors. Where that isn’t possible, declaring error constants removes the potential for modification while retaining their identity semantics.
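
The constant-error pattern can be sketched in a few lines (Error, ErrNotFound, and find are illustrative names, not from any real package):

```go
package main

import "fmt"

// Error is a string type with an Error method, so values of it can be
// declared as constants. A constant cannot be reassigned, so the
// sentinel cannot be modified at run time.
type Error string

func (e Error) Error() string { return string(e) }

// ErrNotFound is a sentinel error declared as a const, not a var.
const ErrNotFound = Error("not found")

func find(key string) error {
	return ErrNotFound
}

func main() {
	if err := find("missing"); err == ErrNotFound {
		fmt.Println("got sentinel:", err)
	}
}
```

Callers can still compare against ErrNotFound like any sentinel error value, but no other package can reassign it.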

The remaining error variables are private declarations which give a symbolic name to an error message. These error values are unexported, so they cannot be used for comparison by callers outside the package. Declaring them at the package level, rather than at the point they occur inside a function, negates the opportunity to add additional context to the error. Instead, I recommend using something like pkg/errors to capture a stack trace at the point the error occurs.

Registration

A registration pattern is followed by several packages in the standard library such as net/http, database/sql, flag, and to a lesser extent log. It commonly involves a package scoped private map or struct which is mutated by a public function—a textbook singleton.

Not being able to create a package scoped placeholder for this state would remove the side effects in the image, database/sql, and crypto packages that register image decoders, database drivers, and cryptographic schemes. However, this is precisely the magic that Peter is referring to: importing a package for the side effect of changing some global state of your program is truly spooky action at a distance.

Registration also promotes duplicated business logic. The net/http/pprof package registers itself, via a side effect with net/http.DefaultServeMux, which is both a potential security issue—other code cannot use the default mux without exposing the pprof endpoints—and makes it difficult to convince the net/http/pprof package to register its handlers with another mux.

If package scoped variables were no longer used, packages like net/http/pprof could provide a function that registers routes on a supplied http.ServeMux, rather than relying on side effects to alter global state.

Removing the ability to apply the registry pattern would also solve the issues encountered when multiple copies of the same package are imported in the final binary and try to register themselves during startup.

Interface satisfaction assertions

The interface satisfaction idiom

var _ SomeInterface = new(SomeType)

occurred at least 19 times in the standard library. In my opinion these assertions are tests. They don’t need to be compiled, only to be eliminated, every time you build your package; instead they should be moved to the corresponding _test.go file. But if we’re prohibiting package scoped variables, this prohibition also applies to tests, so how can we keep this test?

One option is to move the declaration from package scope to function scope, which will still fail to compile if SomeType stops implementing SomeInterface:

func TestSomeTypeImplementsSomeInterface(t *testing.T) {
       // won't compile if SomeType does not implement SomeInterface
       var _ SomeInterface = new(SomeType)
}

But, as this is actually a test, it’s not hard to rewrite this idiom as a standard Go test.

func TestSomeTypeImplementsSomeInterface(t *testing.T) {
       var i interface{} = new(SomeType)
       if _, ok := i.(SomeInterface); !ok {
               t.Fatalf("expected %T to implement SomeInterface", i)
       }
}

As a side note, because the spec says that assignment to the blank identifier must fully evaluate the right hand side of the expression, there are probably a few suspicious package level initialisation constructs hidden in those var declarations.

It’s not all beer and skittles

The previous sections showed that avoiding package scoped variables might be possible, but there are some areas of the standard library which have proved more difficult to apply this idea.

Real singletons

While I think that the singleton pattern is generally overplayed, especially in its registration form, there are always some real singleton values in every program. A good example of this is os.Stdout and friends.

package os 

var (
        Stdin  = NewFile(uintptr(syscall.Stdin), "/dev/stdin")
        Stdout = NewFile(uintptr(syscall.Stdout), "/dev/stdout")
        Stderr = NewFile(uintptr(syscall.Stderr), "/dev/stderr")
)

There are a few problems with this declaration. Firstly, Stdin, Stdout, and Stderr are of type *os.File, not their respective io.Reader or io.Writer interfaces. This makes replacing them with alternatives problematic. However, the notion of replacing them is exactly the kind of magic that this experiment seeks to avoid.

As the previous constant error example showed, we can retain the singleton nature of the standard IO file descriptors, such that packages like log and fmt can address them directly, but avoid declaring them as mutable public variables with something like this:

package main

import (
        "fmt"
        "syscall"
)

type readfd int

func (r readfd) Read(buf []byte) (int, error) {
        return syscall.Read(int(r), buf)
}

type writefd int

func (w writefd) Write(buf []byte) (int, error) {
        return syscall.Write(int(w), buf)
}

const (
        Stdin  = readfd(0)
        Stdout = writefd(1)
        Stderr = writefd(2)
)

func main() {
        fmt.Fprintf(Stdout, "Hello world")
}

Caches

The second most common use of unexported package scoped variables is caches. These come in two forms: real caches made out of maps (see the registration pattern above) and sync.Pool, and quasi-constant variables that ameliorate the cost of a computation.

As an example, the crypto/ecdsa package has a zr type whose Read method zeros any buffer passed to it. The package keeps a single instance of zr around because it is embedded in other structs as an io.Reader, potentially escaping to the heap each time it is instantiated.

package ecdsa 

type zr struct {
        io.Reader
}

// Read replaces the contents of dst with zeros.
func (z *zr) Read(dst []byte) (n int, err error) {
        for i := range dst {
                dst[i] = 0
        }
        return len(dst), nil
}

var zeroReader = &zr{}

However zr doesn’t embed an io.Reader, it is an io.Reader, so the unused zr.Reader field could be eliminated, giving zr a width of zero. In my testing this modified type can be created directly where it is used without performance regression.

        csprng := cipher.StreamReader{
                R: zr{},
                S: cipher.NewCTR(block, []byte(aesIV)),
        }

Perhaps some of the caching decision could be revisited as the inlining and escape analysis options available to the compiler have improved significantly since the standard library was first written.

Tables

The last common use of private package scoped variables is for tables, as seen in the unicode, crypto/*, and math packages. These tables either encode constant data in the form of arrays of integer types, or less commonly simple structs and maps.

Replacing package scoped variables with constants would require a language change along the lines of #20443. So, fundamentally, providing there is no way to modify those tables at run time, they are probably a reasonable exception to this proposal.

A bridge too far

Even though this post was just a thought experiment, it’s clear that forbidding all package scoped variables is too draconian to be workable as a language precept. Addressing the bespoke uses of private vars may prove impractical from a performance standpoint, and would be akin to pinning a “kick me” sign to one’s back and inviting all the Go haters to take a free swing.

However, I believe there are a few concrete recommendations that can be drawn from this exercise, without going to the extreme of changing the language spec.

  • Firstly, public var declarations should be eschewed. This is not a controversial conclusion and not one that is unique to Go. The singleton pattern is discouraged, and an unadorned public variable that can be changed at any time by any party that knows its name should be a design, and concurrency, red flag.
  • Secondly, where public package var declarations are used, the type of those variables should be carefully constructed to expose as little surface area as possible. It should not be the default to take a type expected to be used on a per instance basis, and assign it to a package scoped variable.

Private variable declarations are more nuanced, but certain patterns can be observed:

  • Private variables with public setters, which I labelled registries, have the same effect on the overall program design as their public counterparts. Rather than registering dependencies globally, they should instead be passed in during declaration using a constructor function, compact literal, config structure, or option function.
  • Caches of []byte vars can often be expressed as consts at no performance cost. Don’t forget the compiler is pretty good at avoiding string([]byte) conversions where they don’t escape the function call.
  • Private variables that hold tables, like the unicode package, are an unavoidable consequence of the lack of a constant array type. As long as they are unexported, and do not expose any way to mutate them, they can be considered effectively constant for the purpose of this discussion.
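
The caches-as-consts point can be illustrated with a small sketch (pngMagic and isPNG are invented names for illustration):

```go
package main

import (
	"bytes"
	"fmt"
)

// pngMagic is a constant. The []byte(pngMagic) conversion below does
// not need a package scoped `var pngMagic = []byte{...}` cache: when
// the compiler can prove the slice doesn't escape, the conversion is
// cheap, and the constant itself can never be mutated.
const pngMagic = "\x89PNG\r\n\x1a\n"

func isPNG(b []byte) bool {
	return bytes.HasPrefix(b, []byte(pngMagic))
}

func main() {
	fmt.Println(isPNG([]byte("\x89PNG\r\n\x1a\npayload"))) // true
	fmt.Println(isPNG([]byte("GIF89a")))                   // false
}
```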

The bottom line: think long and hard about adding package scoped variables that are mutated during the operation of your program. It may be a sign that you’ve introduced magic global state.

seriousben, 69 days ago: Maybe some new best practices for Go packages. I find these tricky to implement coming from the Node world, but I see the advantages.