Video

Video: We asked the ‘heads-down crowd’ what was on their phones

These mobile users were busy online-shopping, booking hotels and reading about celebrities. Original Link

Video: How in-ear translator WT2 crosses borders

Typical customers include business users and couples who want to better communicate with international in-laws. Original Link

Webinar #3: Product Backlog Anti-Patterns [Video]

The third Hands-on Agile webinar on product backlog anti-patterns covers common problems, from outdated and oversized tickets to the part-time proxy product owner and his or her idea repository.

Learn more about the numerous anti-patterns that can manifest themselves when you try to create value for your customers and your organization:

Original Link

Webinar #8: Scrum Master Anti-Patterns [Video]

The eighth Hands-on Agile webinar Scrum Master Anti-Patterns addresses twelve anti-patterns of your Scrum Master—from ill-suited personal traits and the pursuit of individual agendas to frustration with the team itself.

The video of the webinar is available now:

Original Link

A walk down TechCrunch Shenzhen’s Startup Alley

Take a walk with us down Startup Alley, a high-tech gathering of dog bones, skin scanners and skateboards. Original Link

Into the Depths: The Technical Details Behind AV1

AV1, the next generation royalty-free video codec from the Alliance for Open Media, is making waves in the broadcasting industry.

Firefox, Coney & Mozilla's AV1 team at IBC 2018

Since AOMedia officially cemented the AV1 v1.0.0 specification earlier this year, we’ve seen increasing interest from the broadcasting industry. Starting with the NAB Show (National Association of Broadcasters) in Las Vegas earlier this year, gaining momentum through IBC (International Broadcasting Convention) in Amsterdam, and more recently at the NAB East Show in New York, AV1 keeps picking up steam. Each of these industry events attracts over 100,000 media professionals. Mozilla attended these shows to demonstrate AV1 playback in Firefox, and showed that AV1 is well on its way to being broadly adopted in web browsers.

Continuing to advocate for AV1 in the broadcast space, Nathan Egge from Mozilla dives into the depths of AV1 at the Mile High Video Workshop in Denver, sponsored by Comcast.

AV1 leapfrogs the performance of VP9 and HEVC, making it a next-generation codec. The AV1 format is and will always be royalty-free with a permissive FOSS license.

The post Into the Depths: The Technical Details Behind AV1 appeared first on Mozilla Hacks – the Web developer blog.

Original Link

Getting Started With Kotlin and Maven [Video]

When discussing Kotlin and Maven, conversations always start with the boring yet utterly important question: how do you create a Kotlin project? In this episode, we look at how to create one from scratch with Maven, including a small Hello World example at the end. Watching this video is highly recommended before all future episodes. Enjoy!

Original Link

Building Your Own Docker Images [Video]

To get anything out of Docker, you must know how to build your own Docker images so that you can later deploy your Java application into one. In this quick and practical episode, you will learn how to do exactly that.

Original Link

Introduction to Unit Tests in Java Using JUnit5 [Video]

Creating Unit Tests in Java Using JUnit5 (Part 1)

Creating Unit Tests in Java Using JUnit5 (Part 2)
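
For readers who want the gist in text form, a minimal JUnit 5 test class might look like the sketch below. The Calculator class is a stand-in invented for this example, not something taken from the videos:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

class CalculatorTest {

    // A trivial stand-in class under test, invented for this example.
    static class Calculator {
        int divide(int a, int b) {
            return a / b;
        }
    }

    @Test
    void dividesTwoNumbers() {
        assertEquals(5, new Calculator().divide(10, 2));
    }

    @Test
    void divisionByZeroThrows() {
        assertThrows(ArithmeticException.class, () -> new Calculator().divide(1, 0));
    }
}
```

Run it with any JUnit 5 (Jupiter) runner, for example via Maven Surefire or your IDE.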

Original Link

What’s New in Java 11? [Video]

Yes, Java 11 is out! But does it have any exciting new features that are useful in day-to-day work? Let’s find out in this short and practical screencast.
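
For a taste of what the screencast covers, here are a few additions that did ship in Java 11: the standardized HTTP client (JEP 321), new String helpers, and var in lambda parameters:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.function.BiFunction;

public class Java11Features {
    public static void main(String[] args) throws Exception {
        // New String helpers in Java 11.
        System.out.println("  ".isBlank());   // true
        System.out.println("ha".repeat(3));   // hahaha
        "line1\nline2".lines().forEach(System.out::println);

        // var is now allowed in lambda parameters (JEP 323).
        BiFunction<Integer, Integer, Integer> add = (var a, var b) -> a + b;
        System.out.println(add.apply(2, 3));  // 5

        // The HTTP client, standardized in java.net.http (JEP 321).
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```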

Original Link

TechNode Reviews the OnePlus 6

Does this cost-conscious premium phone match up with its higher-end competitors? Original Link

Making Scala Faster: Three Expert Tips for Busy Dev Teams [Video and Slides]

How To Make Your Scala Applications Compile Faster

With Scala, JVM developers get a host of benefits over other programming languages. From code conciseness (fewer LOC) and native scalability to support for functional programming paradigms and type safety, Scala is the language of choice for modern enterprises like Amazon, HPE, PayPal, and Walmart.

Original Link

Getting Started With Docker and Java [Video]

Docker is gaining a lot of hype these days. But why would you want to use Docker with Java in the first place? Before you learn the answer to that question, you obviously need to set up Docker first. In this short and practical episode, you’ll learn how to install Docker on your machine and then finish with a small ‘hello-world’ achievement.

Original Link

Shift Developer Conference 2018 — How to Jump Start a Career in Open Source (Video)


As previously posted, I spent this week at the largest developer conference in Southeast Europe, known as the Shift Developer Conference 2018.

I gave a talk on the soft-skills side of development, suggesting some ways to jump-start a career in open source. I did not mention coding or pull requests, or even suggest joining a coding project. It’s more subtle than the obvious components one would expect in such a topic.

Original Link

The Video Wars of 2027

Author’s Note: This post imagines a dystopian future for web video, if we continue to rely on patented codecs to transmit media files. What if one company had a perpetual monopoly on those patents? How could it limit our access to media and culture? The premise of this cautionary tale is grounded in fact. However, the future scenario is fiction, and the entities and events portrayed are not intended to represent real people, companies, or events.

Illustration by James Dybvig

The year is 2029. It’s been two years since the start of the Video Wars, and there’s no end in sight. It’s hard to believe how deranged things have become on earth. People are going crazy because they can’t afford web video fees – and there’s not much else to do. The world’s media giants have irrevocably twisted laws and governments to protect their incredibly lucrative franchise: the right to own their intellectual property for all time.

It all started decades ago, with an arcane compression technology and a cartoon mouse. As if we needed any more proof that truth is stranger than fiction.

Adulteration of the U.S. Legal System

In 1998, the U.S. Congress passed the Sonny Bono Copyright Term Extension Act. This new law extended copyright on corporate works to 95 years from publication, and on individual works to the author’s lifetime plus 70 years. The effort was driven by the Walt Disney Company, to protect its lucrative retail franchise around the animated character Mickey Mouse. Without this extension, Mickey would have entered the public domain, meaning anyone could create new cartoons and merchandise without fear of being sued by Disney. When the extension passed, it gave Disney another 20 years to profit from Mickey. The news sparked outrage from lawyers and academics at the time, but it was a dull and complex topic that most people didn’t understand or care about.

In 2020, Disney again lobbied to extend the law, so its copyright would last for 10,000 years. Its monopoly on our culture was complete. No art, music, video, or story would pass into the public domain for millennia. All copyrighted ideas would remain the private property of corporations. The quiet strangulation of our collective creativity had begun.

A small but powerful corporate collective called MalCorp took note of Disney’s success. Backed by deep-pocketed investors, MalCorp had quietly started buying the technology patents that made video streaming work over the internet. It revealed itself in 2021 as a protector of innovation. But its true goal was to create a monopoly on video streaming technology that would last forever, to shunt profits to its already wealthy investors. It was purely an instrument of greed.

Better Compression for Free

Now, there were some good guys in this story. As early as 2007, prescient tech companies wanted the web platform to remain free and open to all – especially for video. Companies like Cisco, Mozilla, Google, and others worked on new video codecs that could replace the patented, ubiquitous H.264 codec. They even combined their efforts in 2015 to create a royalty-free codec called AV1 that anyone could use free of charge.

AV1 was notable in that it offered better compression, and therefore better video quality, than any other codec of its time. But just as the free contender was getting off the ground, the video streaming industry was thrown into turmoil. Browser companies backed different codecs, and the market fragmented. Adoption stalled, and for years the streaming industry continued paying licensing fees for subpar codecs, even though better options were available.

The End of Shared Innovation

Meanwhile MalCorp found a way to tweak the law so its patents would never expire. It proposed a special amendment, just for patent pools, that said: Any time any part of any patent changes, the entire pool is treated as a new invention under U.S. law. With its deep pockets, MalCorp was able to buy the votes needed to get its law passed.

MalCorp’s patents would not expire. Not in 20 years. Not ever. And because patent law is about as interesting as copyright law, few protested the change.

Things went downhill quickly for advocates of the open web. MalCorp’s patents became broader, vaguer, ever-changing. With billions in its war chest, MalCorp was able to sue royalty-free codecs like AV1 out of existence. MalCorp had won. It had a monopoly on web streaming technology. It began, slowly at first, to raise licensing fees.

Gorgeous Video, Crushing Fees

For those who could afford it, web video got much better. MalCorp’s newest high-efficiency video codecs brought pixel-perfect 32K-Strato-Def images and 3D sound into people’s homes. Video and audio were clear and rich – better than real life. Downloads were fast. Images were crisp and spectacular. Fees were high.

Without access to any competing technologies, streaming companies had to pay billions instead of millions a year to MalCorp. Streaming services had to 100x their prices to cover their costs. Monthly fees rose to $4,500. Even students had to pay $50 a minute to watch a lecture on YouTube. Gradually, the world began to wake up to what MalCorp had done.

Life Indoors

By the mid-twenties, the Robotic Age had put most people out of work. The lucky ones lived on fixed incomes, paid by their governments. Humans were only needed for specialized service jobs, like nursery school teachers and style consultants. Even doctors were automated, using up-to-the-minute, crowd-sourced data to diagnose disease and track trends and outbreaks.

People were idle. Discontent was rising. Where once a retired workforce might have traveled or pursued hobbies, growing environmental problems rendered the outside world mostly uninhabitable. People hiked at home with their headsets on, enjoying stereoscopic birdsong and the idea of a fresh breeze. We lived indoors, in front of screens.

Locked In, Locked Out

It didn’t take long for MalCorp to become the most powerful corporation in the world. When video and mixed reality files made up 90 percent of all internet traffic, MalCorp was collecting on every transmission. Still, its greed kept growing.

Fed up with workarounds like piracy sites and peer-to-peer networks, MalCorp dismantled all legacy codecs. The slow, fuzzy, lousy videos that were vaguely affordable ceased to function on modern networks and devices. People noticed when the signal went dark. Sure, there was still television and solid-state media, but it wasn’t the same. Soon enough, all hell broke loose.

The Wars Begin

During Super Bowl LXII, football fans firebombed police stations in 70 cities, because listening to the game on radio just didn’t cut it. Thousands died in the riots and, later, in the crackdowns. Protesters picketed Disneyland, because the people had finally figured out what had happened to their democracy, and how it got started.

For the first time in years, people began to organize. They joined chat rooms and formed political parties like VidPeace and YouStream, vying for a majority. They had one demand: Give us back free video on the open web. They put banners on their vid-free Facebook feeds, advocating for the liberation of web video from greedy patent holders. They rallied around an inalienable right, once taken for granted, to be able to make and watch and share their own family movies, without paying MalCorp’s fees.

But it was too late. The opportunity to influence the chain of events had ended years before. Some say the tipping point was in 2019. Others blame the apathy and naiveté of early web users, who assumed tech companies and governments would always make decisions that served the common good. That capitalism would deliver the best services, in spite of powerful profit motives. And that the internet would always be free.

Original Link

Testing in Production the Netflix Way [Video]

In June we focused our Test in Production Meetup around chaos engineering. Nora Jones, Senior Software Engineer at Netflix, kicked off the evening with a talk about how Netflix tests in production.

“Chaos engineering…is the discipline of experimenting on production to find vulnerabilities in the system before they render it unusable for your customers. We do this at Netflix through a tool that we call ChAP…[It] can catch vulnerabilities, and allows users to inject failures into services and prod that validate their assumptions about those services before they become full-blown outages.”

Watch her talk below to learn more about how her team helps engineers across Netflix to safely test in production and proactively catch vulnerabilities within their systems. If you’re interested in joining us at a future Meetup, you can sign up here.

Transcript

I’m super excited to be here today. Netflix is a huge fan of testing in production. We do it through chaos engineering, and we’ve recently renamed our team to Resilience Engineering, because while we still do chaos engineering, chaos engineering is one means to an end to get you to that overall resilience story. I’m going to talk a little bit about that today.

Our goal as a team is to improve availability by proactively finding vulnerabilities in services, and we do that by experimenting on the production system. Our team has an active belief that there is a certain class of vulnerabilities and issues that you can only find with live production traffic. I’m going to talk to you a little bit about how we do that today.

First and foremost, our focuses with testing in production are safety and monitoring. You really can’t have great testing in production unless you have these things in place. Testing in production can seem really scary, and if it does seem scary in your company, you should listen to that voice and figure out why it seems scary. It might be because you don’t have a good safety story. It might be because you don’t have a good observability story. We really focus on these two worlds within Netflix and within our tools.

To define chaos engineering just in a simple sentence, it’s the discipline of experimenting on production to find vulnerabilities in the system before they render it unusable for your customers. We do this at Netflix through a tool that we call ChAP, which stands for Chaos Automation Platform. ChAP can catch vulnerabilities, and it allows users to inject failures into services and prod that validate their assumptions about those services before they become full-blown outages.

I’m going to take you through how it works at a high level. This is a hypothetical set of microservice dependencies. There’s a proxy. It sends requests to service A, which fans out to service B, C, and D, and then there’s also a persistence layer. Service D talks to Cassandra, and service B talks to a cache.

I went ahead and condensed this, because it’s about to get busy in a second. Say we want to see if service B is resilient to the failure of its cache. The user goes into the ChAP interface and selects service B as the service to observe and the cache as the service that fails. ChAP will then clone service B into two replicas, which we refer to as the control and the experiment clusters; it works a bit like A/B testing, or like a sticky canary. These are much smaller in size than service B. We only route a very, very small percentage of customers into these clusters, because obviously we want to contain the blast radius. We calculate that percentage based on the number of users currently streaming and using the service.

It will then instruct our failure injection testing framework to tag the requests that match our criteria. It does this by adding information to the header of the request. It creates two sets of tags: one set has instructions to both fail and be routed to the experiment cluster, and the other has instructions just to be routed to the control.

When the RPC client in service A sees instructions to route a request, it will send it to the control or the experiment cluster. Once the failure injection layer in the experiment cluster’s RPC stack sees that the request has been tagged for failure, it will return a failed response. The experiment cluster sees the failed response from the cache and executes the code to handle that failure. We’re doing this under the assumption that the service is resilient to the failure, right? But what we see sometimes is that that’s not always the case. From the point of view of service A, it looks like everything is behaving normally.
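
As a rough illustration of the tagging-and-routing flow just described, here is a toy sketch. This is not Netflix’s actual FIT or ChAP code; the header names and classes are hypothetical:

```java
import java.util.Map;

/** Toy sketch of ChAP-style request tagging. Names are hypothetical, not Netflix code. */
public class FailureInjectionSketch {

    // Hypothetical headers carried with each tagged request.
    static final String ROUTE_HEADER = "x-chap-route"; // "control" or "experiment"
    static final String FAIL_HEADER  = "x-chap-fail";  // injection point to fail, e.g. "cache"

    /** Route a tagged request to the control clone, the experiment clone, or the normal cluster. */
    static String route(Map<String, String> headers) {
        String route = headers.get(ROUTE_HEADER);
        if ("experiment".equals(route)) return "serviceB-experiment";
        if ("control".equals(route))    return "serviceB-control";
        return "serviceB"; // untagged traffic flows as usual
    }

    /** In the experiment clone's RPC layer: return a synthetic failure for tagged calls. */
    static String callCache(Map<String, String> headers, String key) {
        if ("cache".equals(headers.get(FAIL_HEADER))) {
            // Simulate the cache failing; the caller's fallback code should handle this.
            throw new RuntimeException("injected cache failure");
        }
        return "cached-value-for-" + key;
    }

    public static void main(String[] args) {
        Map<String, String> experiment = Map.of(ROUTE_HEADER, "experiment", FAIL_HEADER, "cache");
        System.out.println(route(experiment)); // serviceB-experiment
        try {
            callCache(experiment, "user42");
        } catch (RuntimeException e) {
            System.out.println("fallback path exercised: " + e.getMessage());
        }
    }
}
```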

How do we monitor this while these chaos experiments are running? Because it has the potential to go very poorly. When Netflix started our chaos engineering story, we didn’t have good gates in place. We would run a failure experiment, cross our fingers, and then all sit in a war room watching the graphs and making sure that nothing actually went wrong. Now, we have much more of a safety focus.

We look at a lot of our key business metrics at Netflix. One of our key business metrics is what we call SPS, or stream starts per second. If you think about what’s most important to the business of Netflix, it’s that a customer can watch Friends or The Office or whatever they want to watch, whenever they want to watch it.

What you see in these graphs is an actual experiment, showing the SPS difference between the experiment and control clusters during a chaos experiment. You can see that these deviate a lot from each other, which shouldn’t be the case, because the same percentage of traffic is routed to both clusters.

Because of that, the experiment will use automated canary analysis and see, wow, these deviated really far from each other: I’m going to cut the experiment short. I’m going to stop failing these requests for the customer, and they’ll have a normal experience. From a customer perspective, something like this is seen as more of a blip.

We have a bunch of other protections in place as well. We limit the amount of traffic that’s impacted in each region; we’re not only doing experiments in US-West-2, we’re doing them all over the place, and we limit the number of experiments that can run in a region at a time. We only run during business hours, so we’re not paging engineers and waking them up if something goes wrong. If a test fails, it cannot be automatically run again or picked up by anyone until someone explicitly, manually resolves it and acknowledges: hey, I know this failed, but I fixed whatever needed to be fixed.

We also have the ability to apply custom fast properties to clusters, which is helpful if your service is sharded, as a lot of services at Netflix are. Additionally, and I don’t have this as a bullet point, we have the ability to fail based on device. If we suspect that Apple devices or a certain type of television is having a bunch of issues, we can limit the experiment to that device specifically and see if the issue is widespread across that device.

ChAP has found a lot of vulnerabilities. Here are some examples. This is one of my favorites. The user says, “We ran a ChAP experiment which verifies the service’s fallback path works, which was crucial for our availability, and it successfully caught an issue in the fallback path, and the issue was resolved before it resulted in an availability incident.” This is a really interesting one, because this fallback path wasn’t getting executed a lot, so the user didn’t actually know whether it was working properly, and we were able to simulate it: we were able to actually make the service fail and see whether it went to the fallback path and whether the fallback path worked properly. In this case, the user thought their service was noncritical, or tier two, or whatever you label it as, but really it was a critical service.

Here’s another example. We ran an experiment to reproduce a signup flow fallback issue that happened intermittently at night with certain deploys. Something kind of weird was happening with their service, and we were able to reproduce the issue by injecting 500 milliseconds of latency. By running the experiment, we were able to find the issues in the log file that was uploaded to the Big Data Portal. This helped build context into why the signup fallback experience was served during certain pushes. That fallback experience kept happening, but these users didn’t know why, so they ran a ChAP experiment to see when it was happening and why.

To set up ChAP experiments, there are a lot of things the user needs to go through. They need to figure out what injection points they can use, and teams had to decide if they wanted failure or latency. These are all of our injection points: you can fail Cassandra, Hystrix (which is our fallback layer), RPC service, RPC client, S3, SQS, or our cache; or you can add latency; or you can do both. And you can come up with combinations of different experiments.

What would happen is we would meet with service teams, sit in a room together, and try to come up with a good experiment. It would take a really long time. When setting up an experiment, you also have to decide your ACA configurations, or automatic canary configurations.

We had some canned ACAs set up: a ChAP SPS one, one that looked at system metrics, one that looked at RPS successes and failures, and one that checked whether our service was actually working properly and injecting failures. We learned that experiment creation can be really, really time-consuming, and it was. Not a lot of experiments were getting created; it was hard for a human to hold in their head all the things that made a good experiment. We decided to automate some of this within ChAP. We were looking at things like who was calling whom, at timeout files, at retries, and we figured out that all of that information was in a lot of different places. We decided to aggregate it.

We zoomed into ChAP, and we got cute and we gave it a monocle, and the Monocle provides crucial optics on services. This is what Monocle looks like. It has the ability for someone to look up their app and their cluster and they can see all this information in one place. Each row represents a dependency, and this dependency is what feeds into chaos experiments.

We were using this to come up with experiments, but what we didn’t realize was this information was actually useful to just have in one place as well, so that was an interesting side effect. Users can come here and actually see if there are anti-patterns associated with their service like if they had a dependency that was not supposed to be critical but didn’t have a fallback. Obviously, it was critical now. People could see timeout discrepancies. People could see retry discrepancies. We use this information to score a certain type of experiment’s criticality and fed that into an algorithm that determined prioritization.

Each row represents a dependency, and they can actually expand the rows. Here’s an interesting example. That blue line represents someone’s timeout, and the purple line represents how much time it was actually taking most of the time. You can see it is very, very far away from the timeout. But a lot of this information wasn’t readily accessible. What would happen if we did a chaos experiment just under the timeout. You know? Is that going to pass? It never executes that high. It’s an interesting question. We’re trying to provide this level of detail to users before these chaos experiments get run to give them the opportunity to say, “Wait, this doesn’t look right.”

I’m going to play a little game. I know a lot of you don’t have context on the Netflix ecosystem, but there’s a vulnerability in this service, and I want to see if you can spot it. Take a second to look at it. To give you some context, a sample remote Hystrix command wraps both the sample-rest-client and the sample-rest-client.GET. The Hystrix timeout is set to 500 milliseconds. Sample-rest-client.GET has a timeout of 200 with one retry, and this is fine because it’s a total of 400 milliseconds with exponential backoff, which is within that Hystrix limit. The sample retry client has timeouts of 100 and 600 with one retry.

In this case, the retry might not have a chance to complete given the surrounding Hystrix wrapper timeout: 100 plus 600 is 700 milliseconds, which means Hystrix abandons the request before the RPC has a chance to return. That’s where the vulnerability lies. We’re actually providing this information to users, and what’s interesting is that a lot of this logic lives in different places; they weren’t able to have this level of insight before.
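
The arithmetic behind that vulnerability is worth spelling out: with one retry, the worst case is the first attempt timing out and the retry then running to its own timeout, and that whole span must fit inside the wrapping Hystrix timeout. A back-of-the-envelope check, using the numbers from the talk (illustrative only, not real Hystrix configuration code):

```java
/** Back-of-the-envelope check of the retry-versus-wrapper-timeout vulnerability above. */
public class TimeoutBudgetCheck {

    /** Worst-case time for a call with one retry: first attempt plus the retried attempt. */
    static int worstCaseMs(int firstTimeoutMs, int retryTimeoutMs) {
        return firstTimeoutMs + retryTimeoutMs;
    }

    public static void main(String[] args) {
        int hystrixTimeoutMs = 500;

        // sample-rest-client.GET: 200 ms and 200 ms with one retry -> 400 ms. Fits.
        System.out.println(worstCaseMs(200, 200) <= hystrixTimeoutMs); // true

        // sample retry client: 100 ms then 600 ms with one retry -> 700 ms.
        // 700 > 500, so Hystrix abandons the request before the retry can return.
        System.out.println(worstCaseMs(100, 600) <= hystrixTimeoutMs); // false
    }
}
```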

Why did this happen? It’s easy for a team to go in, look at their config file, and just change the surrounding timeout, right? But we want to figure out why this happened. We can change the timeout, but who’s to say this won’t happen again? We also help with figuring out why these things happen. Engineers weren’t making bad choices; there were just a lot of things to update at once. That’s something to be learned as well.

We use Monocle for automatic experiment creation as well. A user creates an experiment based on a combinatorial set of inputs. We take all these things, and we’re working to automate the creation and running of these experiments so that users don’t have to. We’re automatically creating and prioritizing failure and latency experiments across RPC and Hystrix. ACA configs are added by default, with deviation configurations for SPS, system metrics, and request statistics, and experiments are automatically run as well. Experiment priorities are also computed; I’ll go through the algorithm for that at a high level. We use an RPS stats range bucket, the number of retries, and the number of Hystrix commands associated with the dependency, all weighted appropriately.

Something else we’ve taken into account is the number of commands without fallbacks and any curated impacts that a customer adds to their dependency. A curated impact is something like: this has a known impact on login, this has a known impact on signup, this has a known impact on SPS. We weigh these negatively and don’t run the experiments if the score is negative. Test cases are then ranked and run according to their criticality score: the higher the score, the sooner it’s run and the more often it’s run.

Ironically enough, Monocle has given us feedback that allows us to run fewer experiments in production. It’s ended up as a feedback loop: because we’ve been running so many experiments, we’ve seen patterns between them, so we can now look at certain configuration files, spot certain anti-patterns, and know that they’re actually going to cause a failure, whereas we didn’t know that before.

It has also led to new safety checks. As before, if an experiment fails, it needs to be marked as resolved before it can run again. But now we can explicitly add curated impacts to a dependency: a user can go into their Monocle and add that this has a known login impact, or a known SPS impact. And we’re working on a feedback loop so that when an experiment fails, it adds a curated impact as well. The runner will not run experiments with known impacts.

In summary, ChAP’s Monocle gives you crucial optics in one place, automatically generated experiments, and automatically prioritized experiments, and it finds vulnerabilities before they become full-blown outages. If I can leave you with one tangent, one side piece of advice, it’s to remember why you’re doing chaos experiments and why you’re testing in production: it’s to understand how customers are using your service, and to not lose sight of them. You want them to have the best experience possible; at Netflix, that means being able to stream a video. So monitoring and safety are of the utmost importance in these situations. Thank you. Appreciate it.

Original Link

Continuous Discussions Podcast, Episode 89: The DevOps Toolchain [Podcast]

I just took part in a great podcast hosted by Electric Cloud in partnership with DZone with a round-table discussion on the importance of a healthy DevOps toolchain.

Participants in the podcast included:

Prashant Mohan, Product Manager for SmartBear’s software testing tools. @Prashz91

Lee Atchison, Senior Director, Strategic Architecture at New Relic. @leeatchison

Ian Buchanan, Developer Advocate at Atlassian. @devpartisan

Mark Miller, DevOps Evangelist at Sonatype. @EUSP | alldaydevops.com

Ravi Gadhia, Senior Solutions Engineer at GitHub.

Our hosts were:

Anders Wallgren, CTO at Electric Cloud. @anders_wallgren

Sam Fell, V.P. of Marketing at Electric Cloud. @samueldfell

Key takeaways include:

The DevOps Toolchain as a Value Stream

It’s important to look at the entire value stream, examining the lead-time ladder and identifying the bottlenecks where automation will add the greatest value.

You need to be able to see upstream and downstream to ensure you are adding value and not duplicating effort. Have a single set of integrated tools giving you vision into how the entire process is working. 

Think about what the code is seeing and how it’s being affected.

Use Tools to Align People and Teams 

Given that the greatest challenge to implementing a DevOps methodology is culture, it’s important for every member of the team to have visibility and access to metrics for every link in the toolchain. 

DevOps is enabling cultural transformation while security overlays the entire process.

Value stream mapping is the biggest hurdle companies need to get over. Once you understand the whole process and the outcomes you are trying to achieve, every member of the team is able to see where they are contributing and where they are hindering the process.

Is There One Right Tool?

No; however, there’s a correct set of tools for every organization, and you determine this by choosing the right tool for the problem you are trying to solve. Every tool has a purpose, and it must integrate with every other tool and provide a holistic view of the process. There is value in standardization; however, be aware that the right tool might change over time. As such, tools need to allow clients to swap them in and out.

The tools have to work together. You cannot expect companies to “rip and replace.” It’s too expensive and time-consuming, and a lot of people would need to be trained on the new tool.

Adapting to the Changing Environment

We need to be able to abstract out the details of the actual tools and focus on the outcome of the process versus the tool that’s being used to achieve the outcome.

Tools must integrate into a platform that’s easy and intuitive for team members to use. The tools need to provide access to data and vision into the entire pipeline for auditability and traceability.

Think holistically about the entire DevOps toolchain.

Check out the full episode:

You can find Electric Cloud’s write-up of the podcast here, and check out the previous episodes of Continuous Discussions (#c9d9).

Original Link

AV1: next generation video – The Constrained Directional Enhancement Filter

For those just joining us…
AV1 is a new general-purpose video codec developed by the Alliance for Open Media. The alliance began development of the new codec using Google’s VPX codecs, Cisco’s Thor codec, and Mozilla’s/Xiph.Org’s Daala codec as a starting point. AV1 leapfrogs the performance of VP9 and HEVC, making it a next-next-generation codec. The AV1 format is and will always be royalty-free with a permissive FOSS license.

This post was originally written as the second in an in-depth series of posts exploring AV1 and the underlying new technologies deployed for the first time in a production codec. An earlier post on the Xiph.org website looked at the Chroma from Luma prediction feature. Today we cover the Constrained Directional Enhancement Filter. If you’ve always wondered what goes into writing a codec, buckle your seatbelts and prepare to be educated!

Filtering in AV1

Virtually all video codecs use enhancement filters to improve subjective output quality.

By ‘enhancement filters’ I mean techniques that do not necessarily encode image information or improve objective coding efficiency, but make the output look better in some way. Enhancement filters must be used carefully because they tend to lose some information, and for that reason they’re occasionally dismissed as a deceptive cheat used to make the output quality look better than it really is.

But that’s not fair. Enhancement filters are designed to mitigate or eliminate specific artifacts to which objective metrics are blind, but are obvious to the human eye. And even if filtering is a form of cheating, a good video codec needs all the practical, effective cheats it can deploy.

Filters are divided into multiple categories. First, filters can be normative or non-normative. A normative filter is a required part of the codec; it’s not possible to decode the video correctly without it. A non-normative filter is optional.

Second, filters are divided according to where they’re applied. There are preprocessing filters, applied to the input before coding begins; postprocessing filters, applied to the output after decoding is complete; and in-loop (or just loop) filters that are an integrated part of the encoding process in the encoding loop. Preprocessing and postprocessing filters are usually non-normative and external to a codec. Loop filters are normative almost by definition and part of the codec itself; they’re used in the coding optimization process, and applied to the reference frames stored for inter-frame coding.

A diagram of the AV1 coding loop filters

AV1 uses three normative enhancement filters in the coding loop. The first, the deblocking filter, does what it says; it removes obvious blocking artifacts at the edges of coded blocks. Although the DCT is relatively well suited to compacting energy in natural images, it still tends to concentrate error at block edges. Remember that eliminating this blocking tendency was a major reason Daala used a lapped transform; AV1, however, is a more traditional codec with hard block edges. As a result, it needs a traditional deblocking filter to smooth the block-edge artifacts away.

An example of blocking artifacts in a traditional DCT block-based codec. Errors at the edges of blocks are particularly noticeable as they form hard edges. Worse, the DCT (and other transforms in the DCT family) tend to concentrate error at block edges, compounding the problem.

The last of the three filters is the Loop Restoration filter. It consists of two configurable and switchable filters, a Wiener filter and a Self-Guided filter. Both are convolving filters that try to build a kernel to restore some lost quality of the original input image and are usually used for denoising and/or edge enhancement. For purposes of AV1, they’re effectively general-purpose denoising filters that remove DCT basis noise via a configurable amount of blurring.

The filter between the two, the Constrained Directional Enhancement Filter (CDEF) is the one we’re interested in here; like the loop restoration filter, it removes ringing and basis noise around sharp edges, but unlike the loop restoration filter, it’s directional. It can follow edges, as opposed to blindly filtering in all directions like most filters. This makes CDEF especially interesting; it’s the first practical and useful directional filter applied in video coding.

The Long and Winding Road

The CDEF story isn’t perfectly linear; it’s long and full of backtracks, asides, and dead ends. CDEF brings multiple research paths together, each providing an idea or an inspiration toward the final Constrained Directional Enhancement Filter in AV1. The ‘Directional’ aspect of CDEF is especially novel in implementation, but draws ideas and inspiration from several different places.

The whole point of transforming blocks of pixel data using the DCT and DCT-like transforms is to represent that block of pixels using fewer numbers. The DCT is pretty good at compacting the energy in most visual images, that is, it tends to collect spread out pixel patterns into just a few important output coefficients.

There are exceptions to the DCT’s compaction efficiency. To name the two most common examples, the DCT does not represent directional edges or patterns very well. If we plot the DCT output of a sharp diagonal edge, we find the output coefficients also form… a sharp diagonal edge! The edge is different after transformation, but it’s still there and usually more complex than it started. Compaction defeated!

a sharp edge (left) and its DCT transform coefficients (right) illustrating the problem with sharp features

Sharp features are a traditional problem for DCT-based codecs as they do not compact well, if at all. Here we see a sharp edge (left) and its DCT transform coefficients (right). The energy of the original edge is spread through the DCT output in a directional rippling pattern.

Over the past two decades, video codec research has increasingly looked at transforms, filters, and prediction methods that are inherently directional as a way of better representing directional edges and patterns, and correcting this fundamental limitation of the DCT.

Classic Directional Predictors

Directional intra prediction is probably one of the best known directional techniques used in modern video codecs. We’re all familiar with H.264’s and VP9’s directional prediction modes, where the codec predicts a directional pattern into a new block, based on the surrounding pixel pattern of already decoded blocks. The goal is to remove (or greatly reduce) the energy contained in hard, directional edges before transforming the block. By predicting and removing features that can’t be compacted, we improve the overall efficiency of the codec.

AVC/H.264 intra prediction modes, illustrating modes 0-8.

Illustration of directional prediction modes available in AVC/H.264 for 4×4 blocks. The predictor extends values taken from a one-pixel-wide strip of neighboring pixels into the predicted block in one of eight directions, plus an averaging mode for simple DC prediction.
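
To make the idea concrete, here is a toy sketch of two of these predictors, the vertical mode and the DC averaging mode, for a 4×4 block. It is illustrative only; a real H.264 decoder implements all nine modes with edge-availability rules:

```java
import java.util.Arrays;

/** Toy sketch of two H.264-style 4x4 intra predictors: vertical and DC. */
public class IntraPredictionSketch {

    /** Vertical mode: copy the row of neighboring pixels above straight down. */
    static int[][] predictVertical(int[] above) {
        int[][] block = new int[4][4];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                block[r][c] = above[c];
        return block;
    }

    /** DC mode: fill the block with the rounded average of the neighboring pixels. */
    static int[][] predictDC(int[] above, int[] left) {
        int sum = 0;
        for (int v : above) sum += v;
        for (int v : left)  sum += v;
        int dc = (sum + 4) / 8; // rounded average of the 8 neighbors
        int[][] block = new int[4][4];
        for (int[] row : block) Arrays.fill(row, dc);
        return block;
    }

    public static void main(String[] args) {
        int[] above = {10, 20, 30, 40};
        int[] left  = {12, 18, 28, 38};
        System.out.println(Arrays.deepToString(predictVertical(above)));
        System.out.println(Arrays.deepToString(predictDC(above, left)));
    }
}
```

The encoder picks whichever mode leaves the smallest residual to transform and code.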

Motion compensation, an even older idea, is also a form of directional prediction, though we seldom think of it that way. It displaces blocks in specific directions, again to predict and remove energy prior to the DCT. This block displacement is directional and filtered, and like directional intra-prediction, uses carefully constructed resampling filters when the displacement isn’t an integer number of pixels.

Directional Filters

As noted earlier, video codecs make heavy use of filtering to remove blocking artifacts and basis noise. Although the filters work on a 2D plane, the filters themselves tend to be separable, that is, they’re usually 1D filters that are run horizontally and vertically in separate steps.

Directional filtering attempts to run filters in directions besides just horizontal and vertical. The technique is already common in image processing, where noise removal and special effects filters are often edge- and direction-aware. However, these directional filters are often based on filtering the output of directional transforms, for example, the [somewhat musty] image denoising filters I wrote based on dual-tree complex wavelets.

The directional filters in which we’re most interested for video coding need to work on pixels directly, following along a direction, rather than filtering the frequency-domain output of a directional transform. Once you try to design such a beast, you quickly hit the first Big Design Question: how do you ‘follow’ directions other than horizontal and vertical, when your filter tap positions no longer land squarely on pixels arranged in a grid?

One possibility is the classic approach used in high-quality image processing: transform the filter kernel and resample the pixel space as needed. One might even argue this is the only ‘correct’ or ‘complete’ answer. It’s used in subpel motion compensation, which cannot get good results without at least decent resampling, and in directional prediction which typically uses a fast approximation.

That said, even a fast approximation is expensive when you don’t need to do it, so avoiding the resampling step is a worthy goal if possible. The speed penalty is part of the reason we’ve not seen directional filtering in video coding practice.

Directional Transforms

Directional transforms attempt to fix the DCT’s edge compaction problems in the transform itself.

Experimental directional transforms fall into two categories. There are the transforms that use inherently directional bases, such as directional wavelets. These transforms tend to be oversampled/overcomplete, that is, they produce more output data than they take input data— usually massively more. That might seem like working backwards; you want to reduce the amount of data, not increase it! But these transforms still compact the energy, and the encoder still chooses some small subset of the output to encode, so it’s really no different from usual lossy DCT coding. That said, overcomplete transforms tend to be expensive in terms of memory and computation, and for this reason, they’ve not taken hold in mainstream video coding.

The second category of directional transform takes a regular, non-directional transform such as the DCT, and modifies it by altering the input or output. The alteration can be in the form of resampling, a matrix multiplication (which can be viewed as a specialized form of resampling), or juggling of the order of the input data.

It’s this last idea that’s the most powerful, because it’s fast. There’s no actual math to do when simply rearranging numbers.

A few practical complications make implementation tricky. Rearranging a square to make a diagonal edge into a [mostly] vertical or horizontal line results in a non-square matrix of numbers as an input. Conceptually, that’s not a problem; the 2D DCT is separable, and since we can run the row and column transforms independently, we can simply use different sized 1D DCTs for each length row and column, as in the figure above. In practice this means we’d need a different DCT factorization for every possible column length, and shortly after realizing that, the hardware team throws you out a window.

There are also other ways of handling the non-squareness of a rearrangement, or coming up with resampling schemes that keep the input square or only operate on the output. Most of the directional transform papers mentioned below are concerned with the various schemes for doing so.

But here’s where the story of directional transforms mostly ends for now. Once you work around the various complications of directional transforms and deploy something practical, they don’t work well in a modern codec for an unexpected reason: They compete with variable blocksize for gains. That is, in a codec with a fixed blocksize, adding directional transforms alone gets impressive efficiency gains. Adding variable blocksize alone gets even better gains. Combining variable blocksize and directional transforms gets no benefit over variable blocksize alone. Variable blocksize has already effectively eliminated the same redundancies exploited by directional transforms, at least the ones we currently have, and done a better job of it.

Nathan Egge and I both experimented extensively with directional transforms during Daala research. I approached the problem from both the input and output side, using sparse matrix multiplications to transform the outputs of diagonal edges into a vertical/horizontal arrangement. Nathan ran tests on mainstream directional approaches with rearranged inputs. We came to the same conclusion: there was no objective or subjective gain to be had for the additional complexity.

Directional transforms may have been a failure in Daala (and other codecs), but the research happened to address a question posed earlier: How to filter quickly along edges without a costly resampling step? The answer: don’t resample. Approximate the angle by moving along the nearest whole pixel. Approximate the transformed kernel by literally or conceptually rearranging pixels. This approach introduces some aliasing, but it works well enough, and it’s fast enough.

Directional Predictors, part 2: The Daala Chronicles

The Daala side of the CDEF story began while trying to do something entirely different: normal, boring, directional intra-prediction. Or at least what passed for normal in the Daala codec.

I wrote about Daala’s frequency-domain intra prediction scheme at the time we were just beginning to work on it. The math behind the scheme works; there was never any concern about that. However, a naive implementation requires an enormous matrix multiplication that was far too expensive for a production codec. We hoped that sparsifying — eliminating matrix elements that didn’t contribute much to the prediction — could reduce the computational cost to a few percent of the full multiply.

Sparsification didn’t work as hoped. At least as we implemented it, sparsification simply lost too much information for the technique to be practical.

Of course, Daala still needed some form of intra-prediction, and Jean-Marc Valin had a new idea: A stand-alone prediction codec, one that worked in the spatial domain, layered onto the frequency-domain Daala codec. As a kind of symbiont that worked in tandem with but had no dependencies on the Daala codec, it was not constrained by Daala’s lapping and frequency domain requirements. This became Intra Paint.

A photo of Sydney Harbor with some interesting painting-like features created by the algorithm

An example of the Intra Paint prediction algorithm as applied to a photograph of Sydney Harbor. The visual output is clearly directional and follows the edges and features in the original image well, producing a pleasing (if somewhat odd) result with crisp edges.

The way intra paint worked was also novel; it coded 1-dimensional vectors along only the edges of blocks, then swept the pattern along the selected direction. It was much like squirting down a line of colored paint dots, then sweeping the paint in different directions across the open areas.

Intra paint was promising and produced some stunningly beautiful results on its own, but again wasn’t efficient enough to work as a standard intra predictor. It simply didn’t gain back enough bits over the bits it had to use to code its own information.

A gray image showing areas of difference between the Sydney Harbor photo and the Intra-Paint result

Difference between the original Sydney Harbor photo and the Intra Paint result. Despite the visually pleasing output of Intra Paint, we see that it is not an objectively super-precise predictor. The difference between the original photo and the intra-paint result is fairly high, even along many edges that it appeared to reproduce well.

The intra paint ‘failure’ again planted the seed of a different idea; although the painting may not be objectively precise enough for a predictor, much of its output looked subjectively quite good. Perhaps the paint technique could be used as a post-processing filter to improve subjective visual quality? Intra paint follows strong edges very well, and so could potentially be used to eliminate basis noise that tends to be strongest along the strongest edges. This is the idea behind the original Daala paint-deringing filter, which eventually leads to CDEF itself.

There’s one more interesting mention on the topic of directional prediction, although it too is currently a dead-end for video coding. David Schleef implemented an interesting edge/direction aware resampling filter called Edge-Directed Interpolation (EDI). Other codecs (such as the VPx series and for a while AV1) have experimented with downsampled reference frames, transmitting the reference in a downsampled state to save coding bits, and then upsampling the reference for use at full resolution. We’d hoped that much-improved upsampling/interpolation provided by EDI would improve the technique to the point it was useful. We also hoped to use EDI as an improved subpel interpolation filter for motion compensation. Sadly, those ideas remain an unfulfilled dream.

Bridging the Gap, Merging the Threads

At this point, I’ve described all the major background needed to approach CDEF, but chronologically the story involves some further wandering in the desert. Intra paint gave rise to the original Daala paint-dering filter, which reimplemented the intra-paint algorithm to perform deringing as a post-filter. Paint-dering proved to be far too slow to use in production.

As a result, we packed up the lessons we learned from intra paint and finally abandoned the line of experimentation. Daala imported Thor’s CLPF for a time, and then Jean-Marc built a second, much faster Daala deringing filter based on the intra-paint edge direction search (which was fast and worked well) and a Conditional Replacement Filter. The CRF is inspired somewhat by a median filter and produces results similar to a bilateral filter, but is inherently highly vectorizable and therefore much faster.

A series of graphs showing the original signal and the effects of various filters

Demonstration of a 7-tap linear filter vs. the conditional replacement filter as applied to a noisy 1-dimensional signal, where the noise is intended to simulate the effects of quantization on the original signal.

The final Daala deringing filter used two 1-dimensional CRF filters, a 7-tap filter run in the direction of the edge, and a weaker 5-tap filter run across it. Both filters operate on whole pixels only, performing no resampling. At that point, the Daala deringing filter began to look a lot like what we now know as CDEF.

We’d recently submitted Daala to AOM as an input codec, and this intermediate filter became the AV1 daala_dering experiment. Cisco also submitted their own deringing filter, the Constrained Low-Pass Filter (CLPF) from the Thor codec. For some time the two deringing filters coexisted in the AV1 experimental codebase where they could be individually enabled, and even run together. This led both to noticing useful synergies in their operation, as well as additional similarities in various stages of the filters.

And so, we finally arrive at CDEF: The merging of Cisco’s CLPF filter and the second version of the Daala deringing filter into a single, high-performance, direction-aware deringing filter.

Modern CDEF

The CDEF filter is simple and bears a deep resemblance to our preceding filters. It is built out of three pieces (directional search, the constrained replacement/lowpass filter, and integer-pixel tap placement) that we’ve used before. Given the lengthy background preamble to this point, you might almost look at the finished CDEF and think, “Is that it? Where’s the rest?” CDEF is an example of gains available by getting the details of a filter exactly right as opposed to just making it more and more complex. Simple and effective is a good place to be.

Direction search

CDEF operates in a specific direction, and so it is necessary to determine that direction. The search algorithm used is the same as from intra paint and paint-dering, and there are eight possible directions.

filtering direction with discrete lines of operation for each direction

The eight possible filtering directions of the current CDEF filter. The numbered lines in each directional block correspond to the ‘k’ parameter within the direction search.

We determine the filter direction by making “directional” variants of the input block, one for each direction, where all of the pixels along a line in the chosen direction are forced to have the same value. Then we pick the direction where the result most closely matches the original block. That is, for each direction d, we first find the average value of the pixels in each line k, and then sum, along each line, the squared error between a given pixel value and the average value of that pixel line.

Example illustrating determination of CDEF direction

An example process of selecting the direction d that best matches the input block. First we determine the average pixel value for each line of operation k for each direction. This is illustrated above by setting each pixel of a given line k to that average value. Then, we sum the error for a given direction, pixel by pixel, by subtracting the input value from the average value. The direction with the lowest error/variance is selected as the best direction.

This gives us the total squared error, and the lowest total squared error is the direction we choose. Though the pictured example above does so, there’s no reason to convert the squared error to variance; each direction considers the same number of pixels, so both will choose the same answer. Save the extra division!

This is the intuitive, long way around to computing the directional error. We can simplify the mechanical process down to the following equation:

$$E_d^2 \;=\; \sum_{p} x_p^2 \;-\; \sum_{k} \frac{1}{N_{d,k}} \Bigg( \sum_{p \in P_{d,k}} x_p \Bigg)^{\!2}$$

In this equation, $E_d$ is the error for direction $d$, $x_p$ is the value of pixel $p$, $k$ is one of the numbered lines in the directional diagram above, $P_{d,k}$ is the set of pixels in line $k$ for direction $d$, and $N_{d,k}$ is the cardinality of (the number of pixels in) that line. This equation can be simplified in practice; for example, the first term is the same for each given $d$. In the end, the AV1 implementation of CDEF currently requires 5.875 additions and 1.9375 multiplications per pixel and can be deeply vectorized, resulting in a total cost less than that of an 8×8 DCT.
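
To make the search concrete, here is a toy sketch in Java. It is not the production AV1 code: the real search scores all eight directions with optimized partial sums, while this version scores only four, using a simplified pixel-to-line mapping:

```java
/** Simplified sketch of CDEF's direction search on an 8x8 block. Not production AV1 code. */
public class DirectionSearchSketch {

    // Map pixel (r, c) to its line index k for a given direction.
    static int lineIndex(int dir, int r, int c) {
        switch (dir) {
            case 0:  return r;         // horizontal lines
            case 1:  return c;         // vertical lines
            case 2:  return r + c;     // one diagonal
            default: return r - c + 7; // the other diagonal
        }
    }

    static int lineCount(int dir) {
        return dir < 2 ? 8 : 15;
    }

    /** Returns the direction whose constant-valued lines best fit the block. */
    static int bestDirection(int[][] block) {
        int best = 0;
        long bestScore = Long.MIN_VALUE;
        for (int dir = 0; dir < 4; dir++) {
            long[] sum = new long[lineCount(dir)];
            long[] n   = new long[lineCount(dir)];
            for (int r = 0; r < 8; r++) {
                for (int c = 0; c < 8; c++) {
                    int k = lineIndex(dir, r, c);
                    sum[k] += block[r][c];
                    n[k]++;
                }
            }
            // Minimizing the squared error is equivalent to maximizing
            // sum_k (sum of line k)^2 / N_k, since sum_p x_p^2 does not depend on d.
            long score = 0;
            for (int k = 0; k < sum.length; k++) {
                if (n[k] > 0) score += sum[k] * sum[k] / n[k];
            }
            if (score > bestScore) {
                bestScore = score;
                best = dir;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        int[][] block = new int[8][8];
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++)
                block[r][c] = (c > r) ? 200 : 50; // hard edge along the main diagonal
        System.out.println(bestDirection(block)); // expect 3: lines parallel to the edge
    }
}
```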

Filter taps

The CDEF filter works pixel-by-pixel across a full block. The direction d selects the specific directional filter to be used, each consisting of a set of filter taps (that is, input pixel locations) and tap weights.

CDEF conceptually builds a directional filter out of two 1-dimensional filters. A primary filter is run along the chosen direction, like in the Daala deringing filter. The secondary filter is run twice in a cross-pattern, at 45° angles to the primary filter, like in Thor’s CLPF.

Illustration of primary and secondary filter directions and taps overlaid on top of CDEF filter direction

Illustration of primary and secondary 1-D filter directionality in relation to selected direction d. The primary filter runs along the selected filter direction, the secondary filters run across the selected direction at a 45° angle. Every pixel in the block is filtered identically.

The filters run at angles that often place the ideal tap locations between pixels. Rather than resampling, we choose the nearest exact pixel location, taking care to build a symmetric filter kernel.

Each tap in a filter also has a fixed weight. The filtering process takes the input value at each tap, applies the constraint function, multiplies the result by the tap’s fixed weight, and then adds this output value to the pixel being filtered.

illustration of primary and secondary taps

Primary and secondary tap locations and fixed weights (w) by filter direction. For the primary taps, even Strengths use a = 2 and b = 4, whereas odd Strengths use a = 3 and b = 3. The filtered pixel is shown in gray.

In practice, the primary and secondary filters are not run separately, but combined into a single filter kernel that’s run in one step.

Constraint function

CDEF uses a constrained low-pass filter in which the value of each filter tap is first processed through a constraint function parameterized by the difference d between the tap value and the pixel being filtered, the filter strength S, and the filter damping parameter D:

$$f(d, S, D) = \operatorname{sign}(d)\,\min\left(|d|,\;\max\left(0,\; S - \left\lfloor \frac{|d|}{2^{\,D - \lfloor \log_2 S \rfloor}} \right\rfloor\right)\right)$$

The constraint function is designed to deemphasize or outright reject consideration of pixels that are too different from the pixel being filtered. Tap value differences within a certain range of the center pixel value (as set by the Strength parameter S) are wholly considered. Value differences that fall between the Strength and Damping parameters are deemphasized. Finally, tap value differences beyond the Damping parameter are ignored.

An illustration of the constraint function

An illustration of the constraint function. In both figures, the difference (d) between the center pixel and the tap pixel being considered is along the x axis. The output value of the constraint function is along y. The figure on the left illustrates the effect of varying the Strength (S) parameter. The figure on the right demonstrates the effect of varying Damping (D).
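As a point of reference, here is a small Python sketch of a constraint function with the shape described above. It mirrors the integer formulation used in the AV1 specification (a strength-dependent shift that tapers the response), but treat it as illustrative; the normative definition is in the CDEF paper.

```python
def constrain(diff, strength, damping):
    """Sketch of f(d, S, D): pass small differences through, taper
    mid-range ones, and zero out anything too far from the center pixel."""
    if strength == 0:
        return 0
    # Taper shift, roughly D - floor(log2(S)), clamped to be non-negative.
    shift = max(0, damping - (strength.bit_length() - 1))
    sign = -1 if diff < 0 else 1
    return sign * min(abs(diff), max(0, strength - (abs(diff) >> shift)))
```

With S = 4 and D = 4, for example, a difference of 2 passes through unchanged, a difference of 8 is reduced to 2, and a difference of 16 or more contributes nothing, matching the three regimes in the figure.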

The output value of the constraint function is then multiplied by the fixed weight associated with each tap position relative to the center pixel. Finally, the resulting values (one for each tap) are added to the pixel being filtered, giving us the final, filtered pixel value. It all rolls up into:

$$y(i,j) = x(i,j) + \operatorname{round}\left(\sum_{m,n} w^{(p)}_{d,m,n}\, f\big(x(m,n) - x(i,j),\, S^{(p)},\, D\big) \;+\; \sum_{m,n} w^{(s)}_{d,m,n}\, f\big(x(m,n) - x(i,j),\, S^{(s)},\, D\big)\right)$$

…where the superscripts (p) and (s) mark values for the primary and secondary sets of taps.

There are a few additional implementation details regarding rounding and clipping not needed for understanding; if you’re intending to implement CDEF they can of course be found in the full CDEF paper.
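Putting the taps, weights, and constraint together, a hedged per-pixel sketch (continuing the constrain sketch above, and omitting the rounding and clipping details just mentioned) might look like the following. The tap lists are placeholder arguments, not the normative CDEF tap tables; the real offsets and weights come from the direction-indexed tables pictured earlier.

```python
def cdef_pixel(x, i, j, primary_taps, secondary_taps,
               pri_strength, sec_strength, damping):
    """Filter one pixel of the 2-D array x.

    primary_taps / secondary_taps: lists of ((di, dj), weight) pairs for
    the block's direction d, placeholders rather than the normative CDEF
    tap tables. Weights are assumed to be in units of 1/16.
    """
    center = int(x[i][j])
    total = 0
    for taps, strength in ((primary_taps, pri_strength),
                           (secondary_taps, sec_strength)):
        for (di, dj), w in taps:
            # Constrain the difference, then apply the fixed tap weight.
            total += w * constrain(int(x[i + di][j + dj]) - center,
                                   strength, damping)
    # Weights are sixteenths; real CDEF uses slightly different rounding
    # and clamps the result to the valid pixel range.
    return center + ((total + 8) >> 4)
```

Each filtered pixel only ever moves toward values its constrained neighbors endorse, which is why CDEF can smooth ringing without blurring a genuine edge.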

Results

CDEF is intended to remove or reduce basis noise and ringing around hard edges in an image without blurring or damaging the edge. As used in AV1 right now, the effect is subtle but consistent. It may be possible to lean more heavily on CDEF in the future.

An example illustrating application of CDEF to a picture with ringing artifacts

An example of ringing/basis noise reduction in an encode of the image Fruits. The first inset closeup shows the area without processing by CDEF, the second inset shows the same area after CDEF.

The quantitative value of any enhancement filter must be determined via subjective testing. Better objective metric numbers wouldn’t exactly be shocking, but the kind of visual improvements that motivate CDEF are mostly outside the evaluation ability of primitive objective testing tools such as PSNR or SSIM.

As such, we conducted multiple rounds of subjective testing, first during the development of CDEF (when Daala dering and Thor CLPF were still technically competitors) and then more extensive testing of the merged CDEF filter. Because CDEF is a new filter that isn’t present at all in previous generations of codecs, testing primarily consisted of AV1 with CDEF enabled, vs AV1 without CDEF.

A series of graphs showing test results of AV1 with and without CDEF

Subjective A-B comparison results (with ties) for CDEF vs. no CDEF for the high-latency configuration.

Subjective results show a statistically significant (p<.05) improvement for 3 out of 6 clips. This normally corresponds to a 5-10% improvement in coding efficiency, a fairly large gain for a single tool added to an otherwise mature codec.

Objective testing, as expected, shows more modest improvements of approximately 1%; however, objective testing is primarily useful only insofar as it agrees with subjective results. Subjective testing is the gold standard, and the subjective results are clear.

Testing also shows that CDEF performs better when encoding with fewer codec ‘tools’; like directional transforms, CDEF is competing for coding gains with other, more-complex techniques within AV1. As CDEF is simple, small, and fast, it may provide future means to reduce the complexity of AV1 encoders. In terms of decoder complexity, CDEF represents between 3% and 10% of the AV1 decoder depending on the configuration.

Additional Resources

  1. Xiph.Org’s standard ‘derf’ test sets, hosted at media.xiph.org
  2. Automated testing harness and metrics used by Daala and AV1 development: Are We Compressed Yet?
  3. The AV1 Constrained Directional Enhancement Filter (CDEF), Steinar Midtskogen, Jean-Marc Valin, October 2017
  4. CDEF Presentation Slide Deck for ICASSP 2018, Steinar Midtskogen, Jean-Marc Valin
  5. A Deringing Filter for Daala and Beyond, Jean-Marc Valin
    This is an earlier deringing filter developed during research for the Daala codec that contributed to the CDEF used in AV1.
  6. Daala: Painting Images For Fun and Profit, Jean-Marc Valin
    A yet earlier intra-paint-based enhancement filter that led to the Daala deringing filter, which in turn led to CDEF
  7. Intra Paint Deringing Filter, Jean-Marc Valin 2015
    Notes on the enhancement/deringing filter built out of the Daala Intra Paint prediction experiment
  8. Guided Image Filtering, Kaiming He, Jian Sun, Xiaoou Tang, 2013
  9. Direction-Adaptive Discrete Wavelet Transform for Image Compression, Chuo-Ling Chang, Bernd Girod, IEEE Transactions on Image Processing, vol. 16, no. 5, May 2007
  10. Direction-adaptive transforms for image communication, Chuo-Ling Chang, Stanford PhD dissertation 2009
    This dissertation presents a good summary of the state of the art of directional transforms in 2009; sadly it appears there are no online-accessible copies.
  11. Direction-Adaptive Partitioned Block Transform for Color Image Coding, Chuo-Ling Chang, Mina Makar, Sam S. Tsai, Bernd Girod, IEEE Transactions on Image Processing, vol. 19, no. 7, July 2010
  12. Pattern-based Assembled DCT scheme with DC prediction and adaptive mode coding, Zhibo Chen, Xiaozhong Xu
    Note that this paper is behind the IEEE paywall.
  13. Direction-Adaptive Transforms for Coding Prediction Residuals, Robert A. Cohen, Sven Klomp, Anthony Vetro, Huifang Sun, Proceedings of 2010 IEEE 17th International Conference on Image Processing, September 26-29, 2010, Hong Kong
  14. An Orientation-Selective Orthogonal Lapped Transform, Dietmar Kunz 2008
    Note this paper is behind the IEEE paywall.
  15. Rate-Distortion Analysis of Directional Wavelets, Arian Maleki, Boshra Rajaei, Hamid Reza Pourreza, IEEE Transactions on Image Processing, vol. 21, no. 2, February 2012
  16. Theoretical Analysis of Trend Vanishing Moments for Directional Orthogonal Transforms, Shogo Muramatsu, Dandan Han, Tomoya Kobayashi, Hisakazu Kikuchi
    Note that this paper is behind the IEEE paywall. However, a ‘poster’ version of the paper is freely available.
  17. An Overview of Directional Transforms in Image Coding, Jizheng Xu, Bing Zeng, Feng Wu
  18. Directional Filtering Transform for Image/Intra-Frame Compression, Xiulian Peng, Jizheng Xu, Feng Wu, IEEE Transactions on Image Processing, vol. 19, no. 11, November 2010
    Note that this paper is behind the IEEE paywall.
  19. Approximation and Compression with Sparse Orthonormal Transforms, O. G. Sezer, O. G. Guleryuz, Yucel Altunbasak, 2008
  20. Robust Learning of 2-D Separable Transforms for Next-Generation Video Coding, O. G. Sezer, R. Cohen, A. Vetro, March 2011
  21. Joint sparsity-based optimization of a set of orthonormal 2-D separable block transforms, Joel Sole, Peng Yin, Yunfei Zheng, Cristina Gomila, 2009
    Note that this paper is behind the IEEE paywall.
  22. Directional Lapped Transforms for Image Coding, Jizheng Xu, Feng Wu, Jie Liang, Wenjun Zhang, IEEE Transactions on Image Processing, April 2008
  23. Directional Discrete Cosine Transforms—A New Framework for Image Coding, Bing Zeng, Jingjing Fu, IEEE Transactions on Circuits and Systems for Video Technology, April 2008
  24. The Dual-Tree Complex Wavelet Transform, Ivan W. Selesnick, Richard G. Baraniuk, and Nick G. Kingsbury, IEEE Signal Processing Magazine, November 2005

More articles by Christopher Montgomery…

Original Link

Sexy dancing, smoking and secret events: How do you monitor live streaming?

Original Link

The trends driving Chinese tech: Highlights from Mary Meeker’s 2018 Internet Trends Report

Original Link

Let’s Talk DevOps and Drawbacks

DevOps promises to bridge IT and development teams to build, test, and ship software, so what’s not to love? Despite claiming to increase efficiency and create faster cycle times, DevOps does not automatically foster collaboration between development and operations teams.

When teams are culturally siloed, making DevOps a reality can be a problem for businesses. Recently, GitLab Solutions Architect Victor Hernandez discussed how the current model of DevOps impacts a company’s workflow and delivery time.

What’s in the Webcast

To understand the consequences of an unsuccessful DevOps adoption, we examine a cautionary tale and explore the reality of DevOps, including a trip through nine deployment dysfunctions. We wrap up the webcast with a discussion of common barriers and offer three ways organizations can move towards a smoother transformation.

Watch the Recording

Key Takeaways

Adopting a DevOps model can be a challenging task for any organization, but there are three basic building blocks to help you prepare for a successful implementation of DevOps practices.

People

There needs to be a culture of collaboration between development and operations teams.

Process

Organizations should take care to develop reliable and repeatable processes that are implemented through technology.

Tools

By selecting tools that automate repeatable processes and create environments for deployments, organizations can streamline their efforts.

What has been your team’s DevOps experience? Tweet us @gitlab.

Original Link

How Do I Measure the Software Development Productivity? [Video]

The eternal question for organizations worldwide—how do you measure the productivity of your software development team?

There have been many attempts to answer this question, yet a solid measure continues to elude the industry.

For instance, counting output such as the number of lines of code produced is insufficient as there’s little point in counting lines that may be defective.

Quantifying input isn’t easy, either—do you count the number of individuals? The number of hours spent coding? The total hours spent working? What exactly is productivity in software development?

First, we need to establish how developers themselves perceive productivity. If we can determine what factors lead to perceptions of productivity, we can then look to recreate those factors and help developers feel more productive more often. And if a developer feels more productive, they’re more than likely to deliver better work faster.

To better understand how developers perceive productivity, researchers observed professional software developers from international development companies of varying sizes for four hours each. The findings—revealed in the white paper Understanding software development productivity from the ground up—identify the key factors that make developers feel productive, and provide compelling insight into how to eliminate the activities/tasks that drain developer productivity.

Speak to us today to learn more about how you can improve both the productivity of your development teams and the productivity of all other specialist teams that help you to plan, build, test and deliver software at scale. By focusing on end-to-end productivity, you can optimize your time to value to accelerate the speed and quality of your software products.

Original Link

CIO Panel Interview: The Digital Imperative for Software Delivery Transformation [Video]

Have you ever wondered how your CIO views topics like the need for software testing and digital transformation? At Accelerate 2017 we held our first-ever CIO panel interview, discussing questions like: How urgent is the business need for digital transformation? How does the board receive information like this? What is IT’s new role in the digital economy? The session was so popular, we are gathering a new set of CIOs, including Mahmoud El Assir, the CIO of Verizon, Barry Libenson, the GCIO of Experian, and Jennifer Sepull, the former CIO of USAA, for the upcoming Tricentis Accelerate San Francisco. Check out the video and transcript below for a sneak peek of what’s to come.

Full Transcript

Emmet Keeffe: We actually had the global head of digital at BBVA speak at one of our events over the weekend, and he shared with us a very important insight at BBVA – which is that whenever they have a failure, they double down. That’s their philosophy. And so, Franz, to your point about Formula One, my original dream was to “start a Formula One team”, which I did in America. We started a company, a team, in 2008 to go racing as an American Formula One team. Ultimately, we failed, and so I doubled down. My new dream is to win the championship, so we’ll see what happens. What’s funny about my dream is that when I used to come to Europe and share it, people would burst out laughing, which I really enjoyed.

So, this session is really a follow-up on the morning session from Todd Pierce, which I thought was absolutely extraordinary, and he really gave us three messages. One is that digital transformation is a board and CEO-down transformation that’s happening within every one of the Global 2000 companies, without question. Every board member is truly scared about digital transformation, so that was one key point.

Another one is that agile, DevOps, cloud, and software delivery transformation is on the radar of every CIO globally now. And the third point, which I was incredibly inspired by, is the role that transforming testing can play in unlocking this accelerated software delivery, and ultimately digital. So, we’ve got four leaders from very different types of businesses. I’m going to ask them about their digital transformation, so you can understand how different leaders and different businesses think about digital. I’m also going to ask them each to talk a little bit about their agile DevOps cloud transformation, so you can see some differences there, as well.

Before I do that, I’m just going to talk a little bit about our private equity firm and the program that I run called Insight Ignite. So, personally, what I do is run a program called Ignite, which is about accelerating digital transformation. When I meet with a CIO, I typically ask them a question, which is: what are the four biggest problems that you have to solve in order to unlock your digital transformation? I’m having those conversations all around the world, and I haven’t seen a single company or a single global CIO that can’t answer right away. This is a top-of-mind issue for every global company.

So, if I had a room full of CIOs and I asked the question, how many of you are going to Silicon Valley this year, every single hand would go up in the room. And the reason why they’re going to Silicon Valley is they want to find technology that’s going to help unlock and accelerate their digital transformation. What a lot of CIOs don’t know is that in the Valley, those earlier-stage venture capital firms are placing bets on futuristic ideas. So, they’re betting on whether the market will want to solve this problem three years down the road, or five years down the road. That’s what you’re going to see in Silicon Valley, which is important. But to actually solve things today, you need technologies that are relevant to today’s problems, and that’s really what Insight Venture Partners and our Ignite program are all about.

So, just a bit about the firm. We’re the world’s leading private equity firm in the area of the market that they call software growth. Most of our portfolio companies are between 20 and 100 million in revenue/turnover when we invest. The second part of our vision is only software, so we never invest in hardware or services. The third part is to find businesses that are growing exponentially fast already. We look for businesses where the revenue has already gone from two and a half million to five million, to ten million, to 20 million, and when we find one of those, if we think it’s going to go from 20 to 40 to 80, then we make a growth acceleration investment. As you heard this morning, we invested 165 million in Tricentis. We’re extremely bullish and excited about Tricentis. We agree with Sandeep that there is a very large business that can be built here in this new, continuous testing world.

So as a firm, we currently have 13 billion under management, and each year we go through a fascinating process. Currently, we’re tracking about 100,000 software companies globally, so whenever somebody gets Series A funding or Angel Funding, they go into our database, and we start to track them from the very beginning. We have a team of 40 people that call almost 20,000 CEOs per year. So, that’s how we found Sandeep actually – one of the young analysts was calling, and I think Sandeep was ignoring them for some time. And then finally the analyst said, “well, can I come to your Accelerate conference?”, and that’s really what unlocked the conversation.

Out of those 15,000 conversations, we only find 2500 that we want to meet with. And then in the very end, we consider 250 investments, and we only deploy two billion in about 25 companies each year. So, it’s an incredible needle in the haystack exercise, and I wanted to show you this just to emphasize how special Tricentis really is. This is one of the fastest growing software companies on Earth, and we found them as one of 25 out of a pool of about 100,000 companies globally.

I want to put up a couple of other portfolio companies that you may know. We have 150 currently in the portfolio. Docker, I’m sure you’re aware of, and I’m sure your organization is leveraging microservices and containers and that whole movement. We made that investment about two and a half years ago, just as the business started to accelerate. We put 100 million in there to help them move faster. WalkMe is a very interesting business that provides self-driving software, so it eliminates the need for training. So, when you’re done and the application goes into production, WalkMe is actually a Google Maps-type application that can just show the user how to use it without actually having to put them through any type of training. Each one of these portfolio companies is fascinating, and there are many, many more in the portfolio.

When I hear from a CIO what their four issues are, I match analysts against those, and portfolios against those, and try and help accelerate solving those problems, and therefore accelerating the digital transformation.

So, Todd challenged you this morning to be brave, and to engage your CIO, and this will be a way that you can become more relevant on the digital agenda in the CIO’s office. There are three ways that we engage with large global companies: one is that we create curated briefings on topics of the CIO’s choice. So, if they say AI is top of the radar right now, we’ll bring all of our analysts and all of our portfolios and see if we can accelerate the knowledge around AI. If they say DevOps is a topic, we can address that. Whatever the topic is, we can bring thought leadership and technology and try to accelerate the solution. Franz mentioned we produce thought leadership events around the world. We were in Tuscany this past weekend with 20 CIOs having a very interesting discussion about IoT and about digital transformation. So we do these all around the world, and you can invite your CIO to come to one of these events. And the last thing we do is invite CIOs to sit on the boards of our portfolio companies and help them accelerate. So, actually, Rob and Vittorio and Erwin are all board members at Tricentis, along with Todd, to help think about how to accelerate the business faster.

So, with that, we’re going to transition over to the panel, and first I just want to go through … if you guys just quickly your career 30 seconds prior to Etihad, and then what you’re currently doing at Etihad.

Rob Webb: I’m Rob Webb, I’m the CIO of Etihad Aviation Group, and prior to that, I was the CIO of Hilton Hotels, the global hotel company, and I worked with General Electric and Equifax in the early parts of my career.

Emmet Keeffe: Great. Vittorio?

Vittorio Cretella: I’ve been 26 years with Mars where I was the global CIO, and about a month ago, I retired, and I became an independent advisor.

Emmet Keeffe: Very good. Andreas?

Andreas Kranabitl: I’m responsible for IT at the SPAR Austrian Group. I’m in this company over 55 years, and my passion is Formula One. So if you need somebody to do something for you-

Emmet Keeffe: Okay, the panel discussion is done.

Erwin Logt: I guess after we meet you, all our passions are Formula One. Good afternoon, my name is Erwin Logt. I worked for Procter & Gamble for about 18 years, the last couple of years in the US. Since 2013 I’ve been the CIO of FrieslandCampina, one of the leading dairy companies in the world. For the last two years, I’ve also had the pleasure of being the Chief Digital Officer.

Emmet Keeffe: Fantastic. We’ve got a great group of panel members here, so I want to just investigate first how they’re thinking about digital transformation. Rob, we’ll start with you, if you could just talk-

Rob Webb: Well, I’m just going to kick this off by saying you are all very fortunate to be in this room because the opportunity that we all have is just enormous. You’ve got one of the world’s top venture capital firms that’s mid-stage, so all of the screening of the companies has already been done, and they’ve invested in a fantastic company, Tricentis, right here in Vienna, but also in Silicon Valley. And, the sweet spot that Tricentis is in with respect to automated testing is something that is in incredible demand around the world. So it’s a unique culmination of events because as a CIO, what I’m doing across Etihad and our equity partner airlines is trying to accelerate innovation, and that really means everything we’re doing with online and mobile, and all the rapid application and agile development we need to do is the most differentiating part of our application portfolio. And you’re becoming aware of, getting trained in, using, and buying software that will make your CIO’s survival rate higher. They’ll make your company more profitable, safer, and grow faster. And as a CEO, they love that.

Can you make my testing faster and get my new apps out there so I can be more competitive? Can you do that in a way that makes testing more automated and safer, and can you do that while you’re lowering costs? This is something that is very, very unique, and I think we all have a wonderful opportunity to be part of this revolution. As Todd really highlighted this morning, it takes each of us to commit to make this happen, and we can change the world.

Emmet Keeffe: So I had the privilege last summer to visit the innovation center in Abu Dhabi that Etihad built. I’m just curious – that was a massive investment. Can you talk about what drove the board to make that investment, and really just how the company is thinking about digital transformation?

Rob Webb: Well, you know, we’re the national carrier of the United Arab Emirates, and the country’s a very, very wealthy country, but they have these amazing aspirations. They want to build the very tallest buildings and the best education systems, the best healthcare systems, and they also want the world’s best airline. So that means new planes from Airbus and Boeing, but also it’s more than just the physical aircraft – it’s the service on board and the digital guest experience that goes with that service. That includes the online applications, the loyalty program, the mobile apps, the in-flight entertainment. So, you’ve heard the expression from Marc Andreessen and Ben Horowitz that software is eating the world. What Tricentis allows you and your companies to do is accelerate how software is digitizing your businesses. So, you’re just, as I said, right at the sweet spot of an enormous opportunity.

Emmet Keeffe: Fantastic. When Sandeep and I first started recruiting CIOs, I have to say, actually, testing was not sort of the hottest topic in the world. So, it was fascinating when Sandeep and I started reaching out to some of the world’s most famous CIOs. I think we reached out to 60 of them, thinking that maybe 20 would want to join this board. We actually had a 60-for-60 hit rate. Every single CIO responded and said, “I want to have a conversation about that and I want to consider joining the board of that company,” which I think shows just how relevant testing is for the digital transformation.

So, Vittorio, I know you’ve come from a very different type of business. If you could talk about how you think about digital transformation?

Vittorio Cretella: Sure, as a CPG-CIO, I think of digital transformation from the top line to the bottom line, and everybody gets the top line and why the front end, and the relationship with the customer, benefits from digitalization. We had a clear example where digitization and the development of digital solutions using DevOps make you closer to the consumer, creating value with digital factories that are a blend of the physical product and the data.

What many don’t get is that on top of digitalizing – that shiny, visible part of the iceberg – you need to digitize the whole of the company operations, and that includes your data asset and your enterprise system, your system of records, and your ERP. There are three fundamental reasons why you want to do that. And the first one is that data becomes equity, which historically is not really part of the successful model for a CPG. But now, it’s a big differentiator, and if you don’t get your internal data in order, let alone try to absorb and extract inside from a multitude of external data with the internet of things, with the closeness to consumers and digitization, you actually need to.

So that’s the first reason. The second reason is speed, and we have several examples. When you have a merger or an acquisition, what stands in the way is the integration and the regression testing of ERP, especially when you talk about a global footprint. So, products like Tricentis Tosca would massively reduce the time to market; the lead time to make those changes happen.

And the third one, last but not least, is efficiency – because for any CIO who needs to digitize, part of the funding for that initiative comes from rationalizing and making your IT and your enterprise system more agile. And we have a typical example in that the next frontier to making your enterprise system more efficient is to automate. Testing is, as I said this morning, at least 40% of that effort. So, we have an automation expert center that is looking at all the transversal processes in operations, as well as the development expert center adopting DevOps for ERP, and both of them are using tools like Tricentis Tosca, or looking at a tool like Tosca, to speed up and deliver that part of efficiency. So, again, everybody looks at the top line, and clearly there is a massive differentiation with digitization in the way you craft the consumer value proposition. But we shouldn’t forget the system of records and the massive benefit of digitalizing your operations.

Emmet Keeffe: That’s great. Thank you, Vittorio. We spent the last day and a half here in Vienna with a room full of 15 CIO’s. These are all members of our Growth Advisory Board. The question we were asking is, if you’re calling on a global CIO at a business of this scale, how do you explain Tricentis in such a way that the CIO will actually sponsor that transformation? I know for many of you, if you have the right CIO level sponsorship, it would accelerate everything that you were trying to do with continuous testing and Tricentis. And what’s interesting is that a lot of times, when you reach out to a CIO on continuous testing, they’ll ping their testing organization or their vendor partner and ask, “are we doing test automation?”, and the answer comes back up, yes, we are. What they don’t say is we’re doing UI level automation, and we’re not really doing automation of the core. So this is actually one of the things that came out of the last couple of days that’s really a fundamental value proposition of Tricentis: end-to-end testing across the entire net new and legacy infrastructure.

Vittorio Cretella: I’d like to add something. I do remember the head of our development factory telling me we don’t automate testing because we don’t have the money to pay for scripting. And you know, if we’re script-less, that problem goes away.

Emmet Keeffe: Perfect. That’s great. So Andreas, you obviously come from a more consumer-oriented business. If you could maybe talk about when that pressure really started building on the digital transformation, and where you are in the journey with digital?

Andreas Kranabitl: I think we’ve been in a digital tsunami over the last three years, so the effect on the retail business is very strong. We are working hard to optimize the digital processes using digital systems by I do not know how many thousand products per months. One big part of this digital story is really to optimize the existing processes to be much faster.

The second area, of course, is new business models, which are really possible with digital. So, coming closer to the consumer – here in Austria a store’s opening hours are limited, so I have to deal with how we can really meet the customer all the time and on the weekdays.

And the third focus area is the digital customer experience, so we are talking about, of course, mobile apps and more. This means that the digital challenge is all around the company, so we’re not talking only about online shopping, for example – we are talking about the whole company. The last few years we have been working hard on that. But it is much easier to digitize the existing stuff.

In the past, there was a cushion in between IT and the consumer. But, going digital with this customer experience, using a mobile application, we in IT are dealing directly with the consumer, which places IT in a very strategic position in the company. That position has moved from a supporting role to a strategic one, because we are talking to the consumer and it is challenging to understand how they are thinking. We have built new disciplines inside the IT organization, because we have to really understand how the user is reacting and how to really achieve this customer experience.

As we come into the digital experience, IT projects don’t end; we need a permanent process. Deploying functionality day by day, or hour by hour, is a big challenge for the organization. If you had asked me one year ago, I would have said that testing is boring. Now, testing is strategic, and I’m very happy that my colleagues started to implement Tricentis in all these testing processes years ago in the legacy world.

So now, we are really ready to service the new world.

Emmet Keeffe: That’s great. Well, that’s exciting for everyone in the room and confirms exactly what Todd was saying this morning. We have one more quick question for you: this weekend, we had two chief digital officers talk at this event in Tuscany. One was from BBVA, and the other was from Schindler. Very, very different businesses, and one of the things that struck me was hearing BBVA’s strategy versus Schindler’s strategy. I realized that Schindler is a hardware company and BBVA is really a software company, and therefore they had really different digital strategies, and I’m curious. Are you beginning to think of your self as a software company now, and do you think retailers are going to ultimately be heading in that direction?

Andreas Kranabitl: I think this is similar to our situation. Our board and our owners are really thinking about the future. But I think we have to convince the more senior guys. Our board leader is always saying, “Mr. Kranabitl, you must be aware we are not going to be an IT department. We are a retailer.” But I think the understanding now is more and more that IT, as I said before, is moving into a very strategic position. More and more, I think we’re also moving into a position to really lead this transformation, so inside SPAR, we are talking about digital and innovation.

Emmet Keeffe: That’s great.

Andreas Kranabitl: I think we’re really now a business driver, for example, organizing and doing innovation workshops with the business to explain to them where the future is and what technology can really do into the business model. So, this is our position – I am willing to trying this. So, this is really a cool situation because I think that they’re really starting to see the changes and that’s really the point.

Emmet Keeffe: Okay, great. Well, another really interesting session that we had this weekend was called IT in the Boardroom, and we had the CIO of Rolls-Royce, who’s just finishing up a big transformation at Rolls-Royce. He left us with two things from his keynote: he said, don’t ever underestimate the lack of knowledge that a board director has about technology, and don’t ever underestimate their discomfort in sharing that with you. So, this is actually a big challenge that CIOs have: they’ve got pressure from the board level regarding a digital transformation, but the board members don’t really know what it is, and they’re also afraid to admit that.

Erwin, same thing, I’d like to hear about your business. I know you’ve just taken on this Chief Digital Officer role as well. If you could just talk about what caused the business to head in that direction and how you are thinking about the digital strategy?

Erwin Logt: First of all, I heard a few of my fellow panel members say testing is boring. I don’t think it’s boring. I think it’s actually pretty cool. I have to admit, before I became a member of the advisory board of Tricentis, I didn’t know that much about testing. And I totally underutilized and underestimated it; I should say, the amount of time we spend on it, and the opportunity to really transform testing and how you can drive speed and quality with it… but I’ll come to that in a second.

So FrieslandCampina, for those of you who don’t know it, is a big dairy player, a multinational of about 12 or 13 billion. We sell milk, yogurt, that kind of stuff. For about 80%, we are a fast-moving consumer goods company. So, it’s very similar to Mars, to a certain extent. And about 20 to 30% is B2B, where we take ingredients out of dairy and we sell them, for example, to pharmaceutical businesses. Now, about three years ago, with the executive board, we said, “Okay, digital; the world is changing whether we like it or not. We have to change, we have to adapt.” To a certain extent, we looked at it from an opportunity point of view. To a certain extent, we looked at it from a threat point of view. A recent example, blockchain, is an interesting one that is used to track or trace the ingredients of food. And we have to decide whether we want to play or not play.

Anyway, we declared a strategy. We called it Embrace Digital, and to be very honest, like many other companies, it probably took us about a year to figure out what we really meant by that, and where we really wanted to move the needle. What were the priorities, and how do we measure success, etc.?

Right now, we have a digital strategy, if you’d like, and we are prioritizing three areas: one is the commercial domain, so we’re looking at digitized marketing; if you’d like, we want to set up our marketers for success in the new, digital world, with all kinds of tools and technologies. The second one is, of course, that we want to set up a whole new e-commerce channel. And, to a certain extent, we want to drive a new business model, which for us is direct-to-consumer sales. So, we have a few products where the profit margins allow us to sell them directly to consumers, with all kinds of interesting challenges, etc.

The second priority is analytics. Combined, obviously, with elevating the first priority, there’s more and more data becoming available. There’s more demand for real-time insights, whether it’s consumer analytics, consumer behavior, or just business performance. And the third priority is what we call the digital workplace, or the employee. We truly believe that on the one hand we are investing in customer experience and transforming that, but we also believe in transforming the employee experience, and therefore elevating the performance of the company.

Now, last but not least, of course, like all of us, we are challenged to a higher demand for speed in terms of technologies, and bringing value to market. Of course, it always needs to be cheaper in our business, and we cannot drop in quality. As a matter of fact, quality needs to increase, so it’s an end-to-end-to-end game. And then like I said, things like Agile and DevOps, are coming up. I wouldn’t say we are a front-runner, but we are catching up quite quickly, and with that, obviously, we are now very much aware of the pain points around testing, especially around the lack of speed, if you’d like, in those areas, both on the operational backbone, as well as on the new fancy stuff. The apps and the websites are very much forward also. We are trying to transform that part of the business.

Emmet Keeffe: Thank you very much. Franz mentioned that prior to entering the private equity world, in the year 2000, I spent about a year interviewing analysts, project managers, CIOs, heads of development, and I was asking the question, “if you’re trying to make software delivery go lightning fast, what gets in the way of that speed?” And the answer I kept hearing over and over again was the upfront requirements phase, which is what bogs everything down when you’re trying to get software done at speed. So, we invented the market for software simulation, and what I spent 17 years working on was trying to create a real-time, collaborative prototyping platform for product managers and business analysts, so they could actually collaborate on design solutions with the business. Unfortunately, it was an idea that was about 30 years ahead of its time. But actually, the market is catching up. At Etihad, they’re spending time and money on design thinking – what is the latest around how you rapidly visualize new, innovative ideas? I think an interesting question might be, if your organization is involved in this sort of early, upfront prototyping, how can you get engaged from a testing standpoint earlier, and then help make sure they do testing the right way as they head into the development process?

So, I’ve asked Rob just to talk a little bit about design thinking. What are you doing?

Rob Webb: Well, we’re running short on time, but I’d kind of summarize it by saying that these technology and process changes move very, very quickly. And, for everyone in the room, there’s a huge opportunity to be a change agent, to inform your CIO, inform your head of development about the new Agile design thinking, DevOps world, and to be a champion for the changed behavior. And through that process, in addition to inviting them to connect with Emmet’s Venture Capital firm, which I really think they’d welcome that opportunity, you also have the opportunity to help make them heroes, and to make yourself a hero because you’re going to increase speed, reduce risk, and improve the quality and cost structure of that testing, And there’s many ways to do that. But, it means taking a bit of risk and being a change agent inside your technology organization.

I would just leave it at that. You can read up on design thinking, it’s a customer-first approach to solving problems in agile, bite-sized ways, and it’s working for us. But, I think everyone in the room has a huge opportunity.

Emmet Keeffe: That’s great. Final question and then we’ll wrap it up. We’ve got to get some of these gentlemen on an airplane. Andreas, I want to ask you when did this testing topic hit your radar? How did it hit your radar? I’m just curious. Within your organization, how did they bring it to you?

Andreas Kranabitl: It really did when we started to go online, enabling online shopping, and learned that this is not a project which starts and ends. After the project is finished, the next project is starting. So, we said that this was a permanent process, and the requirements are never-ending, and I think the main problem was speed. I really couldn’t understand it, because I’d been trying to stay on the business side, and if requirements are coming, I have to say yes. And really that was the state-of-the-art online customer requirement: we need this functionality, that functionality, and we deploy functionality on a daily basis. The message or the topic of the year is permanent deployment in our companies, and we are working hard on that. And now this was really the point, because I used to claim that testing is boring.

But now I really understand how strategic testing is and how important it is in this environment, because before, in the classical waterfall project, it was something at the end of the project. It was done by the business people or by some IT people, and it was not really of interest. But now I think testing is really strategic, because first of all, you have to save money, you have to save time, and you have to save time in the testing environment. And I think the sooner you start testing, the more efficient it is.

I think we have to mention that if you ask me what the most important focus in the digital transformation is, it’s people. And I think we cannot use people for testing. There is much higher-value work to do. We need future-oriented staff, and we should not make them suffer by doing manual or needless testing. I think this is very important from my point of view.

Emmet Keeffe: That’s great. Well just in summary, over the last day and a half, there’s really three or four things we heard from this room full of CIO’s. One is that they are under tremendous, tremendous pressure from the board to accelerate this digital transformation. The second thing we heard from them is that they’re looking for opportunities to automate within the IT budget. Typically, 80% of the budget is spent to sort of run the business, and only 20% is on the innovations, so they’re looking for ways to shift that money over. And in our investment portfolio, we’re seeing more and more businesses that are growing fast that have automated something. In this case, it’s testing. We just invested in one that’s automating incident diagnosis and incident resolution. Many of our investments are somehow in the automation space. And the reason why is that CIO’s are looking for a budget that they can release, and redeploy towards digital. The other thing we heard from the CIO’s is that they desperately want speed. I mean, they want 30-day type speed. Not three month, six month, 12 months type speed. And, the last thing they want is everything done at a lower cost: more efficiency.

So, I couldn’t agree more this morning with Todd. I think you’re in an absolutely extraordinary position to have a really fun ten years here as we go through this transformation. And I also think from a career development standpoint, if you are brave enough to elevate your story up into the office of the CIO, I think your career will accelerate as well.

So, with that, I want to thank Todd from this morning and our four panel members here. Thank you, everybody.

Original Link

Build Automation and Release Management With VSTS/TFS 2018 [Video]

Learn how to get started with Release Management and its advantages. See how to create a build definition using CI/CD Tools for VSTS Extensions (I will be using the Package Extension and Publish Artifact tasks), and how to use the DevOps-VSTS-POC trigger to enable CI, all in order to be able to publish, share, install, and query versions. You will see how to create a release definition, choose an artifact, and configure the source and default version for that artifact. See how to create different environments or clone an existing one; in my case, I am going to create QA, Preproduction, and Production environments, each with one phase and one task. See also how to configure the Publish Extension task for each environment. Finally, see an end-to-end continuous delivery pipeline using VSTS extensions with Build and Release Management.

Original Link

Continuous Discussions (#c9d9) Podcast, Episode 86: Human Factors [Podcast]

This morning on our Continuous Discussions (#c9d9) podcast, our panel talked about Human Factors in IT, and how these factors impact the way Dev and Ops create and operate software – including aspects like maintainability, quality, testing, security, release process and usability.

During the discussion, we touched on the “lens” or perspective individuals have, the work they perform and organization-based factors that set the context for human operations and influence our performance and system reliability.

  • What are human factors as they relate to Human/IT safety?
  • What are the key performance influencing factors affecting DevOps?
  • What patterns can help us identify and mitigate negative factors?

Key Links

Once you’ve watched the replay, please visit the links that our speakers shared during the program if you’d like to continue the learning!

Featured Panelists

Dominica DeGrandis
Author: Making Work Visible. Director, Digital Transformation @Tasktop. Teacher of lean/kanban/flow to DevOps enthusiasts. @dominicad | ddegrandis.com

John Allspaw
Former CTO, Etsy. Dad. Author. Guitarist. Student of sociotechnical systems, human factors, and cognitive systems engineering. @allspaw

J. Paul Reed
Build & release engineering / DevOps / human factors; Managing Partner at Release Engineering Approaches: Simply ship. Every time. @ShipShowPodcast alum. @jpaulreed | medium.com/@jpaulreed

Jessica DeVita
Microsoft Sr. PM Visual Studio Team Services (VSTS), deeply interested in human factors, cognitive engineering, and user experience. @ubergeekgirl

Watch the Replay of the Episode

Continuous Discussions (#c9d9) podcasts air monthly (and sometimes more!)

See all episodes here.

Original Link

A new video series: Web Demystified

We don’t have to tell you that video is a key channel for sharing information and instructional skills especially for students and developers who’ve grown up with YouTube. At Mozilla, we’ve always been a leader in supporting the open technologies that bring unencumbered video into the browser and onto the web.

But on top of the technology, there’s content. In 2018, Mozilla’s Developer Outreach team has launched some projects to share more knowledge in video. Earlier this year, Jen Simmons set a high bar with the launch of Layout Land, a series about “what’s now possible in graphic design on the web — layout, CSS Grid, and more.”

This post introduces Web Demystified, a new series targeting web makers. By web makers, I have in mind everyone who builds things for the web: designers, developers, project and team managers, students, hobbyists, and experts. Today we’ve released the opening two episodes on the Mozilla Hacks YouTube channel, introducing web basics.

Our goal is to provide basic information for beginner web makers, at the start of their web journey. The subject matter will also serve as a refresher on web fundamentals.

Our starting point

To begin, there is one question that needs to be answered: What is the web? And voila, here is our opener:

What to expect next

The next four episodes cover some basic technologies at the heart of the web (HTML, CSS, JavaScript, and SVG). We will release a new show every couple of weeks for your viewing pleasure. And then we will continue our journey into details, covering stuff like: how the browser works, image formats for the web, domain names, WebAssembly, and more…

As an added attraction, here is Episode #1 (the second show). It’s all about HTML:

An invitation to participate

In true Mozilla fashion, we’d welcome your help sharing this new content and helping us promote it.

  • If you enjoy those videos, please like them on YouTube, and share them with your friends, colleagues, family, and networks.
  • If you have constructive feedback on the series, please share it here in comments. (Reminder: these shows are aimed at beginners and we aim to keep them brief.)
  • In general, if there are topics you wish to see covered, tell us and if you have questions about the content itself: Ask!
  • Last but not least, if you’re not a native English speaker, please feel free to translate the video captions into your own language. Many people will thank you for that.

Enjoy Web Demystified! And see you in a fortnight.

Jeremie is a long-time contributor and employee of the Mozilla Developer Network, and has been a professional web developer since 2000. He advocates for web standards, writes documentation, and creates all sorts of content about web technologies with the aim of making them accessible to everybody.

More articles by Jeremie Patonnier…

Original Link

WebRTC: Boon for End-User Experience, Burden for Networks?

Like just about any other technology, WebRTC has followed the hype cycle pretty closely (were you surprised to see a headline on this topic in 2018?). WebRTC, an open-source project, aims to boost mobile apps and browsers to real-time speed using APIs. After reaching peak hype around 2013, we slipped down into the trough of disillusionment. However, at times it may feel like we’ve reached an even deeper level of disappointment with WebRTC.

After years of vendor and developer misalignment, it’s become clear that the promise of direct endpoint-to-endpoint communication (eliminating the need for complex backend infrastructure) is more of a dream than a destination, even as cloud and SaaS collaboration tools evolve and gain enterprise adoption.

That is, unless you take a closer look at the actual WebRTC situation, according to Irwin Lazar and Nemertes Research. In fact, 28% of survey participants have WebRTC either supported or planned for web-based voice and video chat.

While plugin-free, browser-based peer-to-peer communication will be a boon for your end users, it may cause problems in your network.

Changing the WebRTC Vision

“No WebRTC-based services have thus far challenged the dominance in the consumer space of Microsoft Skype, Google Hangouts and Apple FaceTime for voice and video calling, or have disrupted traditional UC vendors in the enterprise. WebRTC hasn’t eliminated dedicated softphone apps, nor has it yet led to widespread click-to-call implementations enabling website visitors to speak with companies without having to pick up the phone.” – Irwin Lazar, Nemertes Research

Needless to say, WebRTC hasn’t come to fruition the way you may have expected. And as a result, IT teams have optimized their network management and monitoring to accommodate traditional VoIP and video services.

At first glance, this may seem like it sets you up for success with WebRTC-based communications, too. But if you thought communications services seemed bandwidth-hungry and unwieldy before, just wait until WebRTC comes in and enables unlimited access to communication between employees, partners, and customers across your network.

WebRTC isn’t coming in to replace UC/UCaaS vendors as people may have once believed. However, that doesn’t mean you can sit back and wait for it to burden your network. WebRTC demands a closer look at end-user experience monitoring.

The New(ish) World of WebRTC Visibility

Think about how you monitor your existing VoIP and video traffic. You deploy performance monitoring solutions on the wire of your backend communications infrastructure, set thresholds for MOS scores, voice loss, jitter and other metrics, and proactively address any quality concerns.

But what happens when that backend infrastructure is bypassed? Those reliable communications metrics become more of a mystery and you’re left with server monitoring as opposed to service monitoring.

The key to actually monitoring WebRTC services is to maximize visibility into the voice/video traffic being sent over this specific set of protocols. This means reevaluating your management, development and monitoring strategies to include QoS policy management as well as performance metrics specifically geared toward WebRTC.

The bottom line is that you can’t just sit back and let your existing VoIP and video planning account for WebRTC. That’s just asking for trouble, which you’ll get as soon as you see metrics that say connections are great and users are still complaining about call quality issues.

Make sure you have the means to not just monitor communications infrastructure, but individual VoIP/video packets as well. If you want to see what this kind of visibility looks like in action, check out this case study with a leading UCaaS provider.

Original Link

Avoiding the DevOps Tax [Video]

With the influx of DevOps-related products and services on the market, today’s application delivery toolchain has become complex and fragmented, resulting in more time spent on integrating tools instead of software innovation. Mark Pundsack, Head of Product at GitLab, and guest speaker Christopher Condo, Senior Analyst at Forrester, recently met to discuss the current state of DevOps automation and how IT leaders can unlock themselves from today’s toolchain to avoid the “DevOps tax.”

What Is the DevOps Tax?

In a typical DevOps toolchain, lots of different tools are tied together to deliver DevOps. You have different tools for planning, code creation, CI and security testing, packaging, release and deploy, configuration management, and monitoring.

But administrating all these products and connecting them together is complex. For example, your CI needs to talk to your version control, your code review, your security testing, your container registry, and your configuration management. The permutations are staggering, and it’s not just a one-time configuration – each new project needs to reconnect all these pieces together.

That’s the DevOps tax: time spent on integrating and maintaining complicated toolchains, limiting your efficiency.

What’s in the Webcast

Before we dive into the DevOps tax and how to avoid it, we start by looking at digital transformation and current trends in DevOps, leading up to the DevOps tax, and then offering some best practices for reducing friction.

Watch the Recording

Key Takeaways

The Digital Transformation Imperative

Customer experience is key.

“The people with the bad customer experience, their stock is lagging those companies that have an excellent customer experience. That’s showing you that customer experience really matters.” – Christopher Condo

Expect disruption.

“The common thread is placing the customer first. If there’s a place where the customer’s not being placed first, and some company can come along with an innovative way to do it, it seems like the government is open to it and customers are certainly open to it as well.” – Christopher Condo

Trends in DevOps

Better integration of tools:

“I just ran a Wave on continuous integration tools and customers told us loud and clear that they are looking for a complete, integrated toolchain because they’re tired of integrating their own toolchain. It’s great to have the integrated tool chain but it comes at a cost.” – Christopher Condo

Better integration of teams:

“They want to be able to check in with the security expert and say, “Here’s our design, here’s our architecture, here’s how we’re handling these problems. What are we missing? What do we need to be doing next?” All of those teams sort of act as shared resources, they don’t act as blockers on a particular project.” – Christopher Condo

Containers are critical.

“Containers allow folks to worry about what they’re best at rather than trying to have everybody know everything” – guest @forrester via @GitLab webinar

What Is the DevOps Tax?

“When it’s a pain to integrate security, how many teams just don’t bother? Or when it’s a pain to share information between teams, how many organizations overcome that burden and find a way to work together? How much impact does this tax have on collaboration? With separate tools and separate processes, we’re naturally encouraging separate silos where functional teams work in isolation.” – Mark Pundsack

Concurrent DevOps

“When the entire DevOps lifecycle is seamless, magic starts to happen. Teams can work concurrently, not sequentially” – @MarkPundsack via @GitLab

DevOps Best Practices

  • To maximize your digital transformation, you need to optimize your CI/CD pipeline, create integrated product teams, and modernize your application architecture with microservices and a cloud-native approach.
  • Avoid the DevOps tax by reducing the number of integration points in your toolchain, integrating as deeply as you can, and striving for a single conversation across development, operations, security, and business.
  • If you’re just getting started, start with continuous integration. Automating tests and building confidence in your code will pay dividends many times over.
  • If you already have CI, then move on to continuous delivery. Automate deployments and make them less scary. If you have already started the DevOps transformation, then embrace the culture. You can only go so far when there’s a wall between dev and ops.

Original Link

Effective Pipeline Architecture: Patterns for DevOps Success [Video]

Last week I had the pleasure of speaking at the Application Architecture Online Summit about best practices for effective pipeline architecture, to help you successfully adopt and roll out DevOps practices in your organization.

As I often say, the architecture of your delivery pipeline, like the architecture of your app itself, has a huge impact on your DevOps efforts and release success.

My presentation covered 5 key principles and proven patterns for designing your pipeline and processes to enable effective DevOps at any scale.

Check out the recording of the talk to learn best practices for streamlining your releases, ensuring predictable deployments, enabling security and compliance, and scaling your DevOps adoption throughout the company.

Effective Pipeline Architecture: 5 Pillars to DevOps Success

Watch the recording of the talk!

Original Link

The Shifting Landscape of Mobile Test Automation and the Future of Appium – presented by Jonathan Lipps [Video]

Five years ago, mobile automation was in its infancy. None of the tools for testing mobile apps was very comprehensive, but on the other hand, there were a lot of open source options. Nowadays, the players and the playing field are different, and Appium has come to dominate the open source mobile testing scene.

In this talk, expert Jonathan Lipps gives an exposition of the mobile testing landscape. He talks about what writing tests looks like with each of the current tools and discusses when each might be a good (or bad) choice. In addition, he shares his reflections on increasingly popular modes of testing beyond functional testing (visual testing, for example), and on what challenges might lie ahead for testers.

“There’s a lot at stake in how we invest in mobile testing, and this talk will be an exhortation for everyone involved in the industry to participate in shaping a better and more stable future” – Jonathan Lipps

Key takeaways from Jonathan’s session:

  • History of mobile automation
  • In-depth overview of the current technology and trends
  • Set of factors to use when picking the technology that’s right for you
  • All about Appium’s vision for the future

Jonathan’s slide-deck can be found here – and you can watch the full recording here:

Original Link

Developing Secure Scala Applications With Fortify for Scala [Presentation]

From banks to airlines to credit rating agencies, security continues to be a major focus for organizations across industries. As the headlines show, it is heavily damaging to enterprises when security vulnerabilities in their code, infrastructure, or open source frameworks and libraries get exploited.

The good news is that your Scala development team now has a powerful ally for securing their applications. Co-developed by the Fortify team along with Lightbend, the upcoming Fortify for Scala Plugin is the only Static Application Security Testing (SAST) solution to use the official Scala compiler. This plugin automatically identifies code-level security vulnerabilities early in the SDLC, so you can confidently and reliably secure your mission-critical Scala-based applications.

In this webinar by Seth Tisue, Scala Committer and Senior Scala Engineer at Lightbend, and Poonam Yadav, Product Manager for Fortify at HPE (now Micro Focus), you will learn about:

  • Some of the more than 200 vulnerabilities that the Fortify plugin for Scala can catch and help you resolve.
  • How the plugin works to analyze, identify, and provide actionable recommendations.
  • How to integrate it into your modern DevOps environment.
  • Why this plugin was co-developed by Lightbend and the Fortify team, and how it benefits your organization’s security professionals/CISO office.

Review the Slides

Original Link

#AskFirebase: Predictions, Identity and More, Oh My… [Video]

Ask Firebase is a show where you can ask questions on social media, tag them with #AskFirebase, and the team will pick them up and try to answer them. I got to be a guest on this week's episode, where I answered questions on identity, data, and Firebase Predictions. Way too much fun, and I think we did it all in one take. Even if I was chewing gum at one point (but I did it well, didn't I?).

Anyway, check it out here:

And check out more questions (and answers) on the #AskFirebase playlist!

Original Link

Video: Bilibili’s dance cover stars



Original Link

MediaTakeOut Founder Launches YouTube Alternative for Kids

There have been a lot of fear-based conversations among parents who have no idea what their kids are consuming on YouTube. With the platform now more popular than television, parents want some oversight of what their children watch. Additionally, according to a recent press release, as of Jan. 1, millions of Amazon Fire Stick subscribers lost access to YouTube on their devices, leaving them without the direct access they were accustomed to.

The reason, according to the release, was that YouTube’s algorithm allowed ruthless, graphic, and at times sexually explicit adult material in videos, many aimed at children. These graphic videos were available when users searched for popular fictional children’s characters like Elsa, Spider-Man, and Peppa Pig. The inappropriate videos would also show up on children’s video feeds, because of the algorithm. YouTube’s response was to delete millions of videos from the platform that fell into that description; however, it didn’t permanently fix the problem because the algorithm still exists.

Tube Jr. is a new digital platform aimed at creating a safe alternative to YouTube's search engine. How does it accomplish this? It claims to use real live humans instead of the robotic algorithm. The app does not rely solely on technology to feed kids content, but rather employs a team of professionals, many of whom are parents, to screen all material before it is uploaded to the application and viewed by a child.

Fred Mwangaguhunga (Image: File)


The creator of the app is African American Wall Street lawyer and entrepreneur Fred Mwangaguhunga, who is also the founder of MediaTakeOut.com but, more importantly, the father of 7-year-old triplets. He decided it was time for a change when his own kids were served algorithm-sourced inappropriate content.

Tube Jr. is currently available on all platforms, including Amazon Fire Stick. According to early feedback, parents are excited about the product and about being able to trust that their child is engaging with highly curated content.

Join the Conversation

Original Link

Video: Inside a Chinese girls’ e-sports team training spot

E-sports, or competitive video gaming, is quietly on the rise in China. We visited a training spot of KillerAngel, an all-female e-sports club, in Shanghai to find out how gamer girls are professionally trained.

If you can’t see anything, try QQ video instead.

This is how e-sports teams are trained in China. The clubs usually rent houses on the outskirts of metropolitan areas, where professional players live together to focus on their training. Oftentimes before a tournament, coaches and managers arrange "friendly" matches with the other listed teams, a chance for both sides to spy on each other.

KillerAngel (KA, 杀戮天使 in Chinese) Women's E-sports Club is one of the best professional teams in China. Just before Christmas, the six-woman team, made up of top players in League of Legends, pocketed the championship of the 2017 NTF Women's Super League in China.

Read also: The quiet rise of China’s $3 billion e-sports market

“With more young girls becoming professional (e-sports) players, nurturing the young talent is fulfilling for me,” said Nini, a former professional gamer on KillerAngel. She currently works as the team’s manager.

Valued at $3 billion in 2016, China's e-sports market is expected to reach 220 million spectators by the end of 2017. To put that into perspective, one in every six people in the country will have watched e-sports matches and have some understanding of the scene.

Original Link

The Czechoslovak Animated Movie That Speaks Best of “Technical Debt” [Videos]

Growing up in Yugoslavia in the '80s involved watching a lot of animated movies, primarily of Eastern European production. One of my favorite cartoons back in those days was a Czechoslovak series called "Pat & Mat" (originally A je to!, Pat a Mat).

It was a cartoon about two friends who tackled a different everyday problem in each episode. What made it interesting was their approach to problem solving: a combination of innovative thinking and quick improvisation, aimed at reaching a result as fast as possible.

Every episode starts with setting a goal and ends with the goal being achieved. The interesting part is the stuff in the middle. The improvisations that Pat and Mat come up with never cease to amaze the audience, and just when you think they are done, they manage to pull off yet another astonishing improvisation.

Observing software projects and the way they are delivered often reminds me of the approach that Pat and Mat take, especially on agile projects delivered in iterations referred to as "sprints."

On these projects, we developers, even though highly intelligent people, get so focused on reaching the sprint goal that we often don't stop to think, look at the bigger picture, and notice the mess we are creating while trying to reach the goal quickly. In the race to deliver features, we often forget one of the essential parts of engineering: quality. We even have a dedicated term for the mess we make in the effort of delivering fast: "technical debt." We use this term when we deliver features quickly but with poor quality.

By the time the goal is reached, there is no time to set things straight; a new episode, a new sprint, begins, and with it new adventures start.

Here are several Pat & Mat videos. Enjoy watching technical debt in the making, and try to avoid doing this in your projects.

Original Link

Video sharing app Kuaishou rumored to raise new funding at $15 billion valuation

China's leading short-video and photo sharing app Kuaishou is rumored to be launching a new round of funding at an estimated valuation of $15 billion, according to the self-media outlet "Kaiqi." Kuaishou told other local media that it has nothing to release yet.

In March, Kuaishou raised $350 million in its Series D financing led by the Chinese internet conglomerate Tencent, and was valued at around $3 billion. In previous financing rounds, Kuaishou pocketed funding from Sequoia, DCM, and Baidu.

There has been talk in the industry that Kuaishou plans to file for an IPO either in the US or Hong Kong this year. TechCrunch reported in February that the popular video-sharing app plans to go public in the US.

Kuaishou has done exceptionally well in China with its easy-to-share video features, especially among users in lower-tier cities. The number of its monthly active users surged from 93.4 million in September 2016 to 183 million in September 2017, with 87 million daily active users, according to a report from Jiguang, a mobile data research firm. The latest figures show that Kuaishou now has 700 million registered users and sees over 100 million daily active users.

As a front-runner in China's mobile video sharing sector, Kuaishou allows users to share short video clips or live stream their daily lives, which often feature eating, shopping, or other bizarre performances.

Original Link

Video: We asked startups quirky questions at TechCrunch Shanghai

How would you introduce your startup to an 80-year-old lady? And to your blind date? We asked around at Startup Alley at TechCrunch Shanghai 2017, where about 200 startups showcased their latest products in AI, IoT, fintech, and more.

If you can’t see anything, try QQ video instead.

Timmy Shen
Technology Reporter

Timmy Shen is a technology reporter based in Beijing. He’s passionate about photography, education, food and all things tech. Send tips and feedback to timmyshen@technode.com or follow him on twitter at @timmyhmshen.

Original Link

Video: We talked with startups before they met with VCs at TechCrunch Shanghai

TechCrunch Shanghai 2017 gathered over 110 VCs and 400 entrepreneurs at the VC Meetup, where each startup had 10 minutes to chat with a single VC. Watch what they had to say before meeting the VCs.

If you can’t see anything, try QQ video instead.

Timmy Shen
Technology Reporter

Timmy Shen is a technology reporter based in Beijing. He’s passionate about photography, education, food and all things tech. Send tips and feedback to timmyshen@technode.com or follow him on twitter at @timmyhmshen.

Original Link

DASH playback of AV1 video in Firefox

Bitmovin and Mozilla partner to enable HTML5 AV1 Playback

Bitmovin and Mozilla, both members of the Alliance for Open Media (AOM), are partnering to bring AV1 playback with HTML5 to Firefox, the first browser to play AV1 MPEG-DASH/HLS streams. While the AV1 bitstream is still being finalized, the industry is gearing up for fast adoption of the new codec, which promises to be 25-35% more efficient than VP9 and H.265/HEVC.

The AV1 bitstream is set to be finalized by the end of 2017. You may ask: how does playback work on a bitstream that is not yet finalized? Indeed, this is a good question, as there are still many things in the bitstream that may change during the current stage of development. However, to make playback possible, we just need to ensure that the encoder and decoder use the same version of the bitstream. Bitmovin and Mozilla agreed on a simple but, for the time being, useful codec string to ensure compatibility between the version of the bitstream in the Bitmovin AV1 encoder and the AV1 decoder in Mozilla Firefox:

 "av1.experimental.<git hash>"

A test page has been prepared to demonstrate playback of MPEG-DASH test assets encoded in AV1 by the Bitmovin Encoder and played with the Bitmovin HTML5 Player (7.3.0-b7) in the Firefox Nightly browser.

AV1 DASH playback demo by Bitmovin and Firefox Nightly. Short film "Tears of Steel" cc-by Blender Foundation.

Visit the demo page at https://demo.bitmovin.com/public/firefox/av1/. You can download Firefox Nightly here to view it.

Bitmovin AV1 End-to-End

The Bitmovin AV1 encoder is based on the AOM specification and scaled on Bitmovin's cloud-native architecture for faster throughput. Earlier this year, the team wrote about the world's first AV1 livestream at broadcast quality, which was demoed during NAB 2017 and brought the company the Best of NAB 2017 Award from Streaming Media.

The current state of the AV1 encoder is still far from delivering reasonable encoding times without extensive tuning of the code base: it takes about 150 seconds on an off-the-shelf desktop computer to encode one second of video. For this reason, Bitmovin's ability to provide complete ABR test assets (multiple qualities and resolutions) of high quality in reasonable times was extremely useful for testing MPEG-DASH/HLS playback of AV1 in Firefox. (HLS playback of AV1 is not officially supported by Apple, but is technically possible, of course.) The fast encoding throughput can be achieved thanks to Bitmovin's flexible cloud-native architecture, which allows massive horizontal scaling of a single VoD asset to multiple nodes, as depicted in the following figure. An additional benefit of the scalable architecture is that quality doesn't need to be compromised for speed, as is often the case with a typical encoding setup.
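A rough back-of-the-envelope calculation (ours, not from the post) shows why that matters: at 150 seconds of encoding per second of video, a two-minute clip takes about five hours per rendition on a single machine, so the seven-level quality ladder below would consume roughly 35 machine-hours without horizontal scaling.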

Bitmovin's scalable video encoder.

The test assets provided by Bitmovin are segmented WebM outputs that can be used with HLS and MPEG-DASH. For the demo page, we decided to go with MPEG-DASH and encode the assets to the following quality levels:

  • 100 kbps, 480×200
  • 200 kbps, 640×266
  • 500 kbps, 1280×532
  • 800 kbps, 1280×532
  • 1 Mbps, 1920×800
  • 2 Mbps, 1920×800
  • 3 Mbps, 1920×800

We used the royalty-free Opus audio codec and encoded at 32 kbps, which provides a reasonable-quality audio stream.

Mozilla Firefox

Firefox has a long history of pioneering open compression technology for audio and video. We added support for the royalty-free Theora video codec a decade ago in our initial implementation of HTML5 video. WebM support followed a few years later. More recently, we were the first browser to support VP9, Opus, and FLAC in the popular MP4 container.

After the success of the Opus audio codec, our research arm has been investing heavily in a next-generation royalty-free video codec. Mozilla’s Daala project has been a test bed for new ideas, approaching video compression in a totally new way. And we’ve been contributing those ideas to the AV1 codec at the IETF and the Alliance for Open Media.

AV1 is a new video compression standard, developed by many contributors through the IETF standards process. This kind of collaboration was part of what made Opus so successful, with contributions from several organizations and open engineering discussions producing a design that was better than the sum of its parts.

While Opus was adopted as a mandatory format for the WebRTC wire protocol, we don't have a similar mandate for a video codec. Both the royalty-free VP8 and the non-free H.264 codecs are considered part of the baseline. Consensus was blocked on one side by the desire for a freely implementable spec, and on the other by the desire for hardware-supported video compression, which VP8 didn't have at the time.

Major hardware vendors have been involved with AV1 from the start, which we expect will result in accelerated support being available much sooner.

In April, Bitmovin demonstrated the first live stream using the new AV1 compression technology.

In June, Bitmovin and Mozilla worked together to demonstrate the first playback of AV1 video in a web page, using Bitmovin’s adaptive bitrate video technology. The demo is available now and works with Firefox Nightly.

The codec work is open source. If you're interested in testing this, you can compile an encoder yourself. The format is still under development, so it's important to match the version you're testing with the decoder version in Firefox Nightly. We've extended the MediaSource.isTypeSupported API to take a git commit as a qualifier. You can test for this, e.g.:

var container = 'video/webm';
var codec = 'av1.experimental.e87fb2378f01103d5d6e477a4ef6892dc714e614';
var mimeType = container + '; codecs="' + codec + '"';
var supported = MediaSource.isTypeSupported(mimeType);

Then select an alternate resource or display an error if your encoded resource isn’t supported in that particular browser.

Past commit ids we've supported are aadbb0251996 and f5bdeac22930. The currently-supported commit id, built with default configure options, is available here. Once the bitstream is stable, we will drop this convention and you can just test for codecs=av1 like any other format.

As an example, running this code inside the current page, we can report:

Since the initial demo, we’ve continued to develop AV1, providing feedback from real-world application testing and periodically updating the version we support to take advantage of ongoing improvements. The compression efficiency continues to improve. We hope to stabilize the new format next year and begin deployment across the internet of this exciting new format for video.

Ralph has contributed to media technology and royalty-free codecs for most of his career. Currently he helps maintain the video playback module in Firefox and supports new work in the Rust programming language. In his spare time he enjoys books and early music.

More articles by Ralph Giles…

Martin is responsible for Bitmovin Encoding product strategy, roadmap and development. His team works to enable complex video encoding workflows for global premium media and technology companies like Red Bull Media House and the New York Times. As one of the first employees, Martin led the development of Bitmovin encoding infrastructure, building the world’s first commercial massively scalable encoding service, capable of achieving 100x speeds over realtime. Currently, Martin oversees further development of the Bitmovin Encoding solution, including integration of new technologies, like AV1.

More articles by Martin Smole…

Original Link

Play YouTube Videos in an Android App

In this tutorial, we will learn how to play videos from a YouTube channel in an Android app using the YouTubePlayer API. We will get a channel’s video data in JSON format and parse it into a ListView.

1. Find YouTube Channel ID

Go to any YouTube channel's homepage and copy the URL of that channel. If we copy the URL of Narendra Modi's channel, it will look like this (the last segment after channel/ is the channel ID):

URL: https://www.youtube.com/channel/UC1NF71EwP41VdjAU1iXdLkw

Channel Id: UC1NF71EwP41VdjAU1iXdLkw

2. Generate SHA1 Key

When you sign up for the YouTubePlayer API on the Google Developer Console, you need to generate a SHA1 key. For this, follow these steps:

  1. Go to your Android Project and click on the Gradle icon.
  2. Click on YourProjectName(root) after that. If nothing is showing, click on the refresh icon.
  3. Now go to Tasks -> android -> signingReport. You will find the SHA1 key.
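If you prefer the command line, the same report can be generated with the Gradle wrapper (signingReport is a standard task of the Android Gradle plugin; the exact output format may vary by plugin version):

./gradlew signingReport

Look for the SHA1 line under the debug variant.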


3. Get the API Key for YouTube Data API v3

Go to the Google Developer Console and click on "Select a project." You can choose an existing project or create a new one for YouTube integration.

Search for YouTube Data API v3, choose this API, and enable it. You will then be taken to the project dashboard.

Click on "Credentials," then "Create credentials," and copy your API key and save it. You can restrict the key according to your usage by pasting in the SHA1 key that we generated in step 2. We are selecting "none" here; however, it is always recommended to use a restriction.

4. Get Channel Video List in JSON Format

Now go to the YouTube Data API docs and scroll down to find the Authorize and Execute button. For the sake of simplicity, we execute the request with "Load in APIs Explorer," which takes us to another page.

There are many parameters, but only the part parameter is mandatory. Now put the channel ID that we got in the first step into the channelId text field and hit the Authorize and Execute button. We will get the channel's video list and descriptions in JSON. Copy the URL of the GET request. You can also run this URL in the browser by replacing {YOUR_API_KEY} with your own key.
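For reference, the copied request URL follows this pattern (shown with the channel ID from step 1; maxResults is optional, and {YOUR_API_KEY} is your key from step 3):

https://www.googleapis.com/youtube/v3/search?part=snippet&channelId=UC1NF71EwP41VdjAU1iXdLkw&maxResults=25&key={YOUR_API_KEY}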

Android Integration

We will use the Volley library to fetch and parse the JSON data and populate it into a ListView. We have to add it as a dependency in the app's build.gradle file, as shown below.
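A minimal sketch of the dependency entry (the version number is an assumption; check for the latest Volley release):

dependencies {
    implementation 'com.android.volley:volley:1.1.1'
}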

Now download the YoutubeAndroidPlayerApi.jar file and copy it into the libs folder of your Android project. If the folder does not exist, create it.

5. Create a ChannelActivity

I named it activity_channel. In this activity, we use a ListView to show YouTube videos in a list. The XML code for this file is given below:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/youtube_fragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.techiesatish.youtubeintegration.ChannelActivity">

    <ListView
        android:id="@+id/videoList"
        android:layout_width="wrap_content"
        android:layout_height="match_parent" />
</LinearLayout>

ChannelActivity.java

We parse the JSON data and use a custom adapter to supply data to the ListView. You may see errors about missing classes for now; ignore them and follow the next steps to create those classes.

package com.techiesatish.youtubeintegration;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.util.Log;
import android.widget.ListView;
import com.android.volley.DefaultRetryPolicy;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.Response;
import com.android.volley.RetryPolicy;
import com.android.volley.VolleyError;
import com.android.volley.toolbox.StringRequest;
import com.android.volley.toolbox.Volley;
import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;
import java.util.ArrayList;

public class ChannelActivity extends AppCompatActivity {

    ListView lvVideo;
    ArrayList<VideoDetails> videoDetailsArrayList;
    CustomListAdapter customListAdapter;
    String TAG = "ChannelActivity";
    // Replace {YOUR_API_KEY} with the API key generated in step 3.
    String URL = "https://www.googleapis.com/youtube/v3/search?part=snippet&channelId=UC9CYT9gSNLevX5ey2_6CK0Q&maxResults=25&key={YOUR_API_KEY}";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_channel);
        lvVideo = (ListView) findViewById(R.id.videoList);
        videoDetailsArrayList = new ArrayList<>();
        customListAdapter = new CustomListAdapter(ChannelActivity.this, videoDetailsArrayList);
        showVideo();
    }

    private void showVideo() {
        RequestQueue requestQueue = Volley.newRequestQueue(getApplicationContext());
        // Fetch the channel's video list as JSON and populate the ListView.
        StringRequest stringRequest = new StringRequest(Request.Method.GET, URL,
                new Response.Listener<String>() {
                    @Override
                    public void onResponse(String response) {
                        try {
                            JSONObject jsonObject = new JSONObject(response);
                            JSONArray jsonArray = jsonObject.getJSONArray("items");
                            for (int i = 0; i < jsonArray.length(); i++) {
                                JSONObject item = jsonArray.getJSONObject(i);
                                JSONObject jsonVideoId = item.getJSONObject("id");
                                JSONObject jsonSnippet = item.getJSONObject("snippet");
                                JSONObject jsonThumbnail = jsonSnippet.getJSONObject("thumbnails").getJSONObject("medium");
                                VideoDetails videoDetails = new VideoDetails();
                                String videoId = jsonVideoId.getString("videoId");
                                Log.e(TAG, "New Video Id " + videoId);
                                videoDetails.setURL(jsonThumbnail.getString("url"));
                                videoDetails.setVideoName(jsonSnippet.getString("title"));
                                videoDetails.setVideoDesc(jsonSnippet.getString("description"));
                                videoDetails.setVideoId(videoId);
                                videoDetailsArrayList.add(videoDetails);
                            }
                            lvVideo.setAdapter(customListAdapter);
                            customListAdapter.notifyDataSetChanged();
                        } catch (JSONException e) {
                            e.printStackTrace();
                        }
                    }
                }, new Response.ErrorListener() {
                    @Override
                    public void onErrorResponse(VolleyError error) {
                        error.printStackTrace();
                    }
                });
        // Allow slow responses: 30-second timeout with Volley's default retry policy.
        int socketTimeout = 30000;
        RetryPolicy policy = new DefaultRetryPolicy(socketTimeout,
                DefaultRetryPolicy.DEFAULT_MAX_RETRIES,
                DefaultRetryPolicy.DEFAULT_BACKOFF_MULT);
        stringRequest.setRetryPolicy(policy);
        requestQueue.add(stringRequest);
    }
}

6. Create a CustomListAdapter Class

This class extends BaseAdapter and populates data in a ListView. The code is given below:

package com.techiesatish.youtubeintegration;

import android.app.Activity;
import android.content.Intent;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.BaseAdapter;
import android.widget.LinearLayout;
import android.widget.TextView;
import com.android.volley.toolbox.ImageLoader;
import com.android.volley.toolbox.NetworkImageView;
import java.util.ArrayList;

public class CustomListAdapter extends BaseAdapter {

    Activity activity;
    ImageLoader imageLoader = AppController.getInstance().getImageLoader();
    private LayoutInflater inflater;
    ArrayList<VideoDetails> singletons;

    public CustomListAdapter(Activity activity, ArrayList<VideoDetails> singletons) {
        this.activity = activity;
        this.singletons = singletons;
    }

    public int getCount() {
        return this.singletons.size();
    }

    public Object getItem(int i) {
        return this.singletons.get(i);
    }

    public long getItemId(int i) {
        return (long) i;
    }

    public View getView(int i, View convertView, ViewGroup viewGroup) {
        if (this.inflater == null) {
            this.inflater = (LayoutInflater) this.activity.getLayoutInflater();
        }
        if (convertView == null) {
            convertView = this.inflater.inflate(R.layout.videolist, null);
        }
        if (this.imageLoader == null) {
            this.imageLoader = AppController.getInstance().getImageLoader();
        }
        NetworkImageView networkImageView = (NetworkImageView) convertView.findViewById(R.id.video_image);
        final TextView imgtitle = (TextView) convertView.findViewById(R.id.video_title);
        final TextView imgdesc = (TextView) convertView.findViewById(R.id.video_descriptio);
        final TextView tvURL = (TextView) convertView.findViewById(R.id.tv_url);
        final TextView tvVideoID = (TextView) convertView.findViewById(R.id.tv_videoId);
        ((LinearLayout) convertView.findViewById(R.id.asser)).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View view) {
                Intent intent = new Intent(view.getContext(), VideoActivity.class);
                intent.putExtra("videoId", tvVideoID.getText().toString());
                view.getContext().startActivity(intent);
            }
        });
        VideoDetails singleton = (VideoDetails) this.singletons.get(i);
        networkImageView.setImageUrl(singleton.getURL(), this.imageLoader);
        tvVideoID.setText(singleton.getVideoId());
        imgtitle.setText(singleton.getVideoName());
        imgdesc.setText(singleton.getVideoDesc());
        return convertView;
    }
}

7. Create an AppController Class

In this step, we create an AppController class that extends Application. We also need to register it in the AndroidManifest file:

package com.techiesatish.youtubeintegration;

import android.app.Application;
import android.text.TextUtils;
import com.android.volley.Request;
import com.android.volley.RequestQueue;
import com.android.volley.toolbox.ImageLoader;
import com.android.volley.toolbox.Volley;

public class AppController extends Application {

    public static final String TAG = AppController.class.getSimpleName();

    private static AppController mInstance;
    private ImageLoader mImageLoader;
    private RequestQueue mRequestQueue;

    @Override
    public void onCreate() {
        super.onCreate();
        mInstance = this;
    }

    public static synchronized AppController getInstance() {
        return mInstance;
    }

    public RequestQueue getRequestQueue() {
        if (this.mRequestQueue == null) {
            this.mRequestQueue = Volley.newRequestQueue(getApplicationContext());
        }
        return this.mRequestQueue;
    }

    public ImageLoader getImageLoader() {
        getRequestQueue();
        if (this.mImageLoader == null) {
            this.mImageLoader = new ImageLoader(this.mRequestQueue, new LruBitmapCache());
        }
        return this.mImageLoader;
    }

    public <T> void addToRequestQueue(Request<T> req, String tag) {
        if (TextUtils.isEmpty(tag)) {
            tag = TAG;
        }
        req.setTag(tag);
        getRequestQueue().add(req);
    }

    public <T> void addToRequestQueue(Request<T> req) {
        req.setTag(TAG);
        getRequestQueue().add(req);
    }

    public void cancelPendingRequests(Object tag) {
        if (this.mRequestQueue != null) {
            this.mRequestQueue.cancelAll(tag);
        }
    }
}

8. Create an LruBitmapCache Class

With the Volley library, downloading and caching images is very simple. The code for this class is given below:

package com.techiesatish.youtubeintegration;

import android.graphics.Bitmap;
import android.util.LruCache;
import com.android.volley.toolbox.ImageLoader;

public class LruBitmapCache extends LruCache<String, Bitmap> implements ImageLoader.ImageCache {

    public static int getDefaultLruCacheSize() {
        return ((int) (Runtime.getRuntime().maxMemory() / 1024)) / 8;
    }

    public LruBitmapCache() {
        this(getDefaultLruCacheSize());
    }

    public LruBitmapCache(int sizeInKiloBytes) {
        super(sizeInKiloBytes);
    }

    @Override
    protected int sizeOf(String key, Bitmap value) {
        return (value.getRowBytes() * value.getHeight()) / 1024;
    }

    @Override
    public Bitmap getBitmap(String url) {
        return get(url);
    }

    @Override
    public void putBitmap(String url, Bitmap bitmap) {
        put(url, bitmap);
    }
}

9. Create a VideoDetails Class

In this class, we use setter and getter methods to store and retrieve the details of each video.

package com.techiesatish.youtubeintegration;

public class VideoDetails {

    String VideoName;
    String VideoDesc;
    String URL;
    String VideoId;

    public void setVideoName(String VideoName) {
        this.VideoName = VideoName;
    }

    public String getVideoName() {
        return VideoName;
    }

    public void setVideoDesc(String VideoDesc) {
        this.VideoDesc = VideoDesc;
    }

    public String getVideoDesc() {
        return VideoDesc;
    }

    public void setURL(String URL) {
        this.URL = URL;
    }

    public String getURL() {
        return URL;
    }

    public void setVideoId(String VideoId) {
        this.VideoId = VideoId;
    }

    public String getVideoId() {
        return VideoId;
    }
}

10. Create a VideoActivity

We will create an activity to show YouTube videos, using YouTubePlayerView. This is the XML file of the activity:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.techiesatish.youtubeintegration.VideoActivity">

    <com.google.android.youtube.player.YouTubePlayerView
        android:id="@+id/youtubeview"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />
</LinearLayout>

VideoActivity.java

This activity extends YouTubeBaseActivity. In this activity, we get the video ID from ChannelActivity and use the cueVideo() method to show the video.

package com.techiesatish.youtubeintegration;

import android.content.Intent;
import android.os.Bundle;
import android.util.Log;
import android.widget.Toast;
import com.google.android.youtube.player.YouTubeBaseActivity;
import com.google.android.youtube.player.YouTubeInitializationResult;
import com.google.android.youtube.player.YouTubePlayer;
import com.google.android.youtube.player.YouTubePlayerView;

public class VideoActivity extends YouTubeBaseActivity implements YouTubePlayer.OnInitializedListener {

    YouTubePlayerView youTubePlayerView;
    String API_KEY = "Your API Key";
    private static final int RECOVERY_REQUEST = 1;
    String TAG = "VideoActivity";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_video);
        youTubePlayerView = (YouTubePlayerView) findViewById(R.id.youtubeview);
        youTubePlayerView.initialize(API_KEY, this);
    }

    @Override
    public void onInitializationSuccess(YouTubePlayer.Provider provider, YouTubePlayer youTubePlayer, boolean wasRestored) {
        // Read the video id passed in from ChannelActivity and cue it for playback.
        Bundle bundle = getIntent().getExtras();
        String showVideo = bundle.getString("videoId");
        Log.e(TAG, "Video " + showVideo);
        youTubePlayer.cueVideo(showVideo);
    }

    @Override
    public void onInitializationFailure(YouTubePlayer.Provider provider, YouTubeInitializationResult result) {
        if (result.isUserRecoverableError()) {
            result.getErrorDialog(this, RECOVERY_REQUEST).show();
        } else {
            Toast.makeText(this, "Error initializing YouTube player", Toast.LENGTH_LONG).show();
        }
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == RECOVERY_REQUEST) {
            // Retry initialization after the user resolves the recoverable error.
            getYouTubePlayerProvider().initialize(API_KEY, this);
        }
    }

    protected YouTubePlayer.Provider getYouTubePlayerProvider() {
        return youTubePlayerView;
    }
}

11. Add Internet Permission and Define AppController in the Manifest File

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.techiesatish.youtubeintegration">

    <uses-permission android:name="android.permission.INTERNET" />

    <application
        android:name=".AppController"
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:roundIcon="@mipmap/ic_launcher_round"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">

        <activity android:name=".ChannelActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <activity android:name=".VideoActivity" />
    </application>
</manifest>

Now run the app and enjoy! If you are facing any problems, let me know in the comments; I will be happy to help. You can download the code from GitHub.

Original Link

Video: This robot, made in India, will soon be crawling on the moon

Ever tried landing a remote-controlled helicopter without crashing it? Now imagine landing a rover to crawl on the moon.

It’s no longer just Elon Musk and Richard Branson with visions of being space entrepreneurs. Startups like India’s TeamIndus have joined the race.

Editing by Steven Millward

(And yes, we’re serious about ethics and transparency. More information here.)

About Sumit

A lover of startups and tech, food and travel, cricket and books. Senior Editor with TIA. Mail me at sumit@techinasia.com or tweet me @chakraberty

Original Link

Cloudinary: More Storage for Image and Video in the Cloud

Thanks to Amit Sharon, Vice President of Customer Success at Cloudinary, for sharing the news of, and rationale for, the 5X expansion of the company's free tier in the cloud.

"We've always been about helping developers," said Sharon, "and they need room to experiment, run tests, and build proofs of concept. This is especially important for start-ups and teams within companies with limited budget, or before asking for budget and making a commitment."

More media is being used with responsive design, along with more solutions for video. Cloudinary has increased the size of its free tier so developers have an accessible place to offload video and images for optimization.

The trend is moving toward a lot more personalized video:

  • Retailers using Facebook with content from a spokesperson combined with user-generated content (UGC).

  • Video game trailers with the player’s image integrated into the trailer.

  • Real estate websites automatically adding logos and contact details to videos uploaded by users/realtors.

Given the response rates for personalized content, we see this type of content growing.

With Cloudinary, there's no incremental production work; all of the optimization features are accessible via APIs.

Cloudinary routes traffic in real time between its partner CDNs for optimal performance, based on the geography where the content will be seen.

Amit sees the future of images and video progressing on a number of fronts, with more media channels coming online daily. Regardless of the channel, images and video will need to be optimized so clients can provide the best user experience (UX) and customer experience (CX).

“We realize how critical it is to manage content end-to-end and close the loop between marketers, content editors, developers and content delivery, to ensure optimal UX/CX, leading to better engagement and conversion rates,” says Amit.

Original Link

Tour the latest features of the CSS Grid Inspector, July 2017

We began work on a developer tool to help with understanding and using CSS Grid over a year ago. In March, we shipped the first version of a Grid Inspector in the Firefox DevTools along with CSS Grid. Now significant new features are landing in Firefox Nightly. Here’s a tour of what’s arrived in July 2017.

Download Firefox Nightly (if you don’t have it already) to get access to the latest and greatest, and to keep up with the continuing improvements.

Jen Simmons is a Designer and Developer Advocate at Mozilla, specializing in CSS and layout design on the web. She is a member of the CSS Working Group, and speaks at many conferences including An Event Apart, SXSW, Fluent, Smashing Conf and more.

More articles by Jen Simmons…

Original Link

BE Study Guide Series: Spotting Trends and Opportunities in Tech [Video]

Using insights gleaned from the BLACK ENTERPRISE Entrepreneurs Summit, the previous segment of our BE Study Guide Series focused on how small businesses can best leverage technology to support their marketing, finance, and press efforts, and ultimately grow as a company and brand.

Now, we will address how small businesses and startups can utilize current trends in technology to increase their potential success, based on takeaways from the Entrepreneurs Summit panel “Spotting Trends & Opportunities in Tech,” which included Esosa Ighodaro, co-founder and president of COSIGN; James Andrews, CEO of SMASHD Ventures; and Tunisha Walker, senior vice president of Capalino & Company.

Watch the Entire Session Here:

(Source: YouTube, User: Black Enterprise)

Takeaway 1: When considering where technology is headed, The Third Wave, by author Steve Case, is a great reference point.

During the session, Andrews describes each tier mentioned in Case’s book as follows:

“In the first wave, we built the internet. In the second wave, we built the mobile app. The third wave is the idea that we are living in this era of the ‘ubiquitous web.’”

He then poses the question, “How do you solve for tomorrow’s ‘X,’ if you don’t know [what] tomorrow’s X [is]?” Elaborating further, Andrews says that he sees a trend around solving real-world problems, not necessarily only first-world problems in the tech space.

Takeaway 2: The government is catering contracts to minority- and women-owned enterprises.

“It’s surprising that a person that is a minority is not certified,” Walker says.

Takeaway 3: Alternative forms of funding are readily available.

Look for alternative ways to fund your business, and consider applying for grants or using equity crowdfunding.

Takeaway 4: Influencer marketing is a game changer.

“You find people who match the brand’s story, and you target their audiences through social media,” Ighodaro says. This tactic is not only popular with larger conglomerates, but smaller mom-and-pop shops have found success leveraging the power of influencer marketing, as well.

Original Link
