
Digital Transformation in the Age of Covid-19

Only 16% of companies worldwide rely heavily on digital technologies to run their businesses; the rest operate hybrid-digital or non-digital models.[i] The COVID-19 pandemic has changed all that and has forced companies to finally embrace digital transformation.

Digital transformation is the process whereby businesses adopt new technologies and fundamentally change the way they deliver services to customers. In other words, digital transformation is how enterprises integrate digital technologies into their business operations.

Because of Covid-19 lockdown policies and work-from-home guidelines, many workforces have had to shift their operations to the digital world in order to maintain stability and profitability. This unprecedented and unpredictable crisis has shown the world just how important technologies are to modern businesses. Enterprises now have to rethink their digital transformation strategies and the pace at which to roll them out.


But the barriers to rapidly overhauling a business environment are many. The lack of a clear strategy and budget shortfalls are the two biggest challenges most companies face. Yet many businesses are taking the leap, since the pandemic leaves them no other option.

Levi’s is an example of a company that has successfully reduced losses by accelerating its digital transformation. It automated most of its logistics processes, deployed an omni-channel e-commerce and delivery system, and invested heavily in data and artificial intelligence, thereby keeping the company afloat. Strengthening the connection between digital technologies and operations has been beneficial for both customers and employees.

It’s not just retail—factories, hospitals and governments can also take advantage of transforming digitally. To embrace a data-driven future, the following technologies are the key trends we’re already seeing help people adapt to the new normal:


Video IoT

• Video analytics enhances authentication systems, for example using facial recognition for touchless entry.
• Additional functions, such as mask & temperature detection or crowd density detection, are being released to help follow Covid-19 regulations.


Cybersecurity

• As enterprises embrace digital transformation, their data needs to be stored and shared safely. Cybersecurity solutions help organizations fend off hackers and ensure that internal and external networks are well guarded.


Big Data

• By transforming unstructured data into structured information, big data solutions optimize and accelerate data analysis. They can collate disparate information and allow organizations to make more informed decisions.

During this challenging time, companies should stay open to digital transformation. Not only the pandemic but also the development of edge AI and edge computing is pushing companies to speed up their digital transformation plans. Gorilla, which specializes in edge computing, can help organizations transition easily and make operations smoother in the long run.

Click here to learn more about Gorilla’s post-pandemic solutions:  https://www.gorilla-technology.com/IVAR/Post-Pandemic-Area-Management

 

[i] Harvard Business Review, https://hbr.org/resources/pdfs/comm/microsoft/Competingin2020.pdf


Video Analytics Technology

What is Video Analytics?

Picture this: you’re in a crowded train station and have lost your friend in the mix. How do you and your brain go about picking your friend out of the crowd? Do you go through the same process each time you look for something, or does it depend on what you’re seeking? From a human perspective, looking for things seems rather straightforward, and although we can describe those processes easily to others, the way we search for and identify things differs depending on what we’re searching for. Finding a lost friend in a station is a different task than searching for your keys before leaving for the office.

Video analytics to find someone in a busy station.

Now imagine how we might get computers to look for things. They would need some kind of input to detect specific objects, recognize and differentiate between them, and then notify us when the requested result is found (or not). This process is what we call intelligent video analytics, or IVA for short.

This article will go into how different kinds of IVA work and also give some examples along the way.

What Video Analytics Does

The process involved in getting IVA output from software is similar to how people visually detect and recognize things. The essence of what video analytics does can be described in three steps.

    • Video analytic software breaks down video signals into frames. This article won’t describe this step in detail, but understanding how digital video works is a worthwhile topic in its own right and good background for the next steps.
    • The software then splits the video frames into video data and analytic data, and uses algorithms to process the analytic data into the specifically desired outputs.
    • Finally, it delivers the result in a predetermined manner.
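The three steps above can be sketched in miniature. In this toy Python pipeline, frames are plain 2D brightness grids and a simple frame difference stands in for the analytic data; the function names and thresholds are our own illustration, not how any particular product implements it:

```python
# Sketch of the three IVA steps: decode frames, analyze, deliver.
# Frames are simulated as 2D grayscale grids; a real system would
# decode them from a camera stream instead.

def frame_difference(prev, curr):
    """Analytic step: fraction of pixels that changed noticeably."""
    changed = sum(
        1
        for row_p, row_c in zip(prev, curr)
        for p, c in zip(row_p, row_c)
        if abs(p - c) > 25          # per-pixel change threshold
    )
    total = len(curr) * len(curr[0])
    return changed / total

def run_pipeline(frames, on_event, motion_threshold=0.5):
    """Walk frame pairs, compute analytic data, deliver results."""
    events = []
    for i in range(1, len(frames)):
        score = frame_difference(frames[i - 1], frames[i])
        if score >= motion_threshold:          # predefined condition
            event = {"frame": i, "motion": score}
            events.append(event)
            on_event(event)                    # step 3: delivery
    return events

# Two static frames, then one with a bright "object" appearing.
still = [[10] * 4 for _ in range(4)]
moved = [[200] * 4 for _ in range(2)] + [[10] * 4 for _ in range(2)]
alerts = run_pipeline([still, still, moved], print)
```

The delivery step here is just a callback; in production it might be a dashboard update, a push notification, or a database write.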

Approaches to Video Analytic Processing

Let’s get into the details for number two from the above list as it’s what most people have been talking about recently.

Depending on the goal, video needs to be processed using different methods in order to deliver relevant results. Gorilla has categorized the most widely used types of analytics into five fundamental IVA groups which are described in more detail below.

1. Behavior Analytics

These analytics use algorithms that are designed to look for a specific behavior.

Thinking more deeply, a behavior could be defined as action over time. With that in mind, each Behavior Analytic needs more than one frame from the video to determine if an event or behavior has occurred. So it follows that the algorithms in Behavior Analytics look for changes from frame to frame over time to identify a very specific and predefined event or action. We’ve broken down and classified the Behavior Analytics that are used in our solutions here:

People Counting


The People Counting IVA does just that: it detects and counts people over a specified period as they enter a zone and/or cross a line that users define in the software.

Line Crossing


This IVA detects when people cross a line (or lines) of user-defined length and position.

Intrusion Detection


Intrusion Detection monitors user-created zones to detect any activity or entries by moving objects (like people).
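At its core, an intrusion check asks whether a detected object's position falls inside a user-created zone. A minimal sketch, assuming a rectangular zone and pre-computed detection centroids (real deployments typically support arbitrary polygons):

```python
# Minimal sketch of an intrusion check: is a detected object's
# centroid inside a user-created rectangular zone?

def in_zone(point, zone):
    """True if (x, y) lies inside the rectangle (x1, y1, x2, y2)."""
    (x, y), (x1, y1, x2, y2) = point, zone
    return x1 <= x <= x2 and y1 <= y <= y2

zone = (100, 100, 300, 240)            # x1, y1, x2, y2 in pixels
detections = [(50, 50), (150, 200), (310, 120)]
intruders = [p for p in detections if in_zone(p, zone)]
```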

Direction Detection


This IVA monitors a user-created zone for people moving A) within the zone AND B) in the marked direction. Movements in the opposite direction do not trigger an alert.

Direction Violation Detection


The same as the Direction Detection IVA, but it detects and alerts on movements in the opposite direction. Security checks at airports and other transportation hubs, for example, stand to benefit from this type of IVA.

Loitering Detection


The Loitering Detection IVA monitors people entering and then remaining in a user-created zone for longer than a specified period.
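The frame-to-frame comparison that drives these behavior analytics can be made concrete. This sketch follows a single tracked x-coordinate across frames and reports line crossings with their direction, a simplified stand-in for the Line Crossing and Direction Detection IVAs (assuming one object, a vertical line, and perfect tracking):

```python
# Behavior analytics compare positions across frames. Here we track
# one person's x-coordinate per frame and report when they cross a
# vertical line, and in which direction.

LINE_X = 100  # user-defined line position, in pixels

def detect_crossings(xs, line_x=LINE_X):
    """Return (frame, direction) events for each line crossing."""
    events = []
    for frame, (prev, curr) in enumerate(zip(xs, xs[1:]), start=1):
        if prev < line_x <= curr:
            events.append((frame, "left-to-right"))
        elif curr < line_x <= prev:
            events.append((frame, "right-to-left"))
    return events

track = [80, 95, 110, 120, 90]   # x-position in successive frames
events = detect_crossings(track)
```

A Direction Violation check is then just filtering these events for the forbidden direction, and counting the allowed ones gives a basic people counter.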

2. People/Face Recognition

People and Face Recognition could easily be sliced into two core groups, but we keep them as one since they are so closely related. As Behavior Analytics need to detect human shapes to perform effectively, People/Face Recognition IVAs are next on our list.

Human Detection


The Human Detection IVA detects human figures within the video. Once detected, features like clothing color, gender, eyewear, masks, and age group can be detected as well.

Face Recognition


This IVA recognizes and identifies faces. This is used in conjunction with Gorilla’s BAP software and its facial recognition database. While uses for this are myriad (and often in the news), we most often see Face Recognition used for Watch Lists, VIP identification, Attendance Systems, and Black Lists.

3. License Plate Recognition

Some people collect license plates and like them because different places have different plates. However, this variety makes it incredibly difficult for one License Plate Recognition (LPR) IVA to work globally (or even just from state to state). Currently, we generally see this IVA added as a customized feature because adding all the different and beautiful plates into the general release of the software would require too much space.


Having said that, there are currently two approaches to LPR.

      • Parking LPR detects the license plates of parked vehicles in user created zones, vehicles travelling slowly, or vehicles stopped at boom gates.
      • Road Traffic LPR detects the license plates of moving vehicles, or vehicles stopped at a stoplight.

4. Object Recognition


Replace the faces in the Face Recognition IVA with any given object and you get the Object Recognition IVA. Here, algorithms train the software to detect and recognize a specific object, like a hot dog. There are a lot of different objects in the world (even more than there are license plate types!), so the training and size requirements add up quickly.

5. Business Intelligence

Dashboards showing data about various business activities are a valuable asset in just about any retail or enterprise setting. Using video analytics from within a dashboard to enrich those results should be part of everyone’s toolbox.

While the IVAs in numbers one through four above are widely used in surveillance scenarios, a multitude of business scenarios can also reap the benefits that video analytics offers. To see some great examples, check out how Gorilla applies these to create intelligent solutions for multiple business markets and industries.

Putting the Video Analytic Idea Together

As you read above, these IVAs all orchestrate various algorithms to achieve and deliver results. Essentially though, an IVA detects and determines whether a defined event or behavior occurs within a video camera’s field of view and then notifies the designated user of the finding.

In a similar manner, most of us go through varying processes depending on if we’re searching for keys at home or for our friend in a busy station.

Video Analytic Processing Power

Thinking about the entire process, could there be a single solution that can do everything effectively? It seems like an insurmountable number of tasks: from processing each frame’s analytic data and displaying it together with the video, to creating a complete video system with an array of user-selectable and customizable IVAs in a building or any other scenario, all the way to putting multiple systems together that report back to a central control center.

It’s not impossible. To demonstrate this, let’s look at what IVAR™ from Gorilla can do and how it operates.

CPU and GPU Video Analytic Processing

Video analytics as a whole requires a lot of dedicated processing power. Before optimization and edge devices with capable CPUs, video analytics processed both video and analytic data on one machine and required additional GPUs to do most of the work. Technology, and the ability to split the two up, has advanced to the point that it’s now possible to keep the video data at the edge while pushing the analytic data up the network for quick processing.

One technology, which Gorilla was the first to adopt, is the Intel® distribution of the OpenVINO™ toolkit. Using the OpenVINO™ toolkit to optimize IVAR keeps deployment and upkeep costs low while decreasing operating temperatures by minimizing the need for expensive GPUs.

Delivering and Deploying Video Analytics

Considering the multitude of IVA capabilities and applications in the world today, Gorilla is asked about many things regarding delivering and deploying video analytics and the IVAR platform.

Q: How many video feeds can IVAR handle?

A: IVAR is a highly scalable solution that fits nearly any size system, from a single camera with one IVA to multiple systems with hundreds of cameras running multiple IVAs.

Q: I need a complete VMS with integrated IVA, is IVAR right for my company?

A: From using it as a standalone all-in-one video surveillance solution, to integrating via IVAR’s open API, to adding it to an existing Milestone XProtect® system, IVAR excels at being versatile in suiting your needs.


Final Thoughts on Video Analytics

The next time you find yourself in a crowded station and need to locate a missing friend (which is hopefully never), think of how a computer attached to a camera might go about doing it. The way that video analytics works is an incredibly interesting and broad topic to cover in one article.  If you made it this far in the article, you should now have a solid understanding of how video analytics operates and how video analytic software solutions like IVAR are driving technology forward.

Further Reading on Video Analytics

To read more about edge AI, click here

For more info on IVAR, click here

You’re also welcome to contact us here at any time: Contact Us

If you liked this article, why not share it or leave a comment below? We love conversation and talking about our tech!


Smart Transportation – Traffic Intersections

Improving the Intersection

Look both ways before crossing – we all learn this when we’re young and pass it on to our children when they start going out in the world. The traffic signal and crosswalk are fundamental pieces in creating and encouraging orderly interactions between motorists and pedestrians – and as safe as they are, there’s always room for improvement.

In this article we’ll give a brief history of the intersection, talk about some safety problems it faces, and demonstrate a solution that’s already showing massive results.

Intersections Are an Evolving Technology

Road intersections have been a part of society for thousands of years, yet traffic signals and crosswalks were introduced only within the last century or so.

While most would assume that the introduction of these came with the invention of the automobile, they were actually introduced a bit before cars rolled off the assembly line. It is believed that the first ticket for running a red light was issued just minutes after the first traffic light was turned on.

It should be a no-brainer that vehicles stop at red lights and pedestrians keep to crosswalks. However, that idea gets mixed up when we see motorcycles or scooters cutting corners by using crosswalks, or pedestrians starting to cross before they get the green light.

However ubiquitous traffic violations are in today’s dense urban environments, the question remains: what are their root causes? Is it a failure of urban design to integrate traffic flow, a lack of law enforcement, poor dissemination of road rules, impatient or inconsiderate drivers, or a combination of these?

Cities Need Smarter Solutions

With budget constraints and limited staff on hand to cover their thousands of intersections, cities worldwide need out-of-the-box solutions to improve traveler safety and remind everyone that traffic laws are in place for good reason.

Enter the Smart City

We see news about them nearly every day, but what is a Smart City? Simply put, a Smart City uses different types of IoT devices and technologies to collect data, then uses insights from that data to manage assets, resources, and services more efficiently than would otherwise be possible. It follows that the room for improvement in the systems currently in place is incredibly broad, and those systems deserve a closer look and a rethink.

The IoT sensors and data-gathering devices we talk about in regard to the smart city almost always fall into the category of edge devices, and some of them perform part of the actual work involved in processing the data they gather.

A Place for Video Analytics and Edge AI

Edge AI, when applied to video data, can turn the unstructured data into actionable and understandable information. Considering systems that might be able to benefit from the application of Smart City concepts, let’s look at our earlier example from the top – the traffic intersection.


Creating A Smart Intersection

How much would it cost and would it even be possible to have a traffic officer at each intersection, 24/7, and have them log each traffic violation with information like time of day, license plate number, and vehicle make & color? And also have them raise a sign with driver info and a warning each time it happened, and report it all back to HQ as each infraction occurred?

The answer here is obvious: it would take an entire team with specific skills to accomplish this for all intersections and to achieve 24/7 coverage. This is the definition of a fool’s errand. It’s impossible to achieve with only ‘feet on the ground’.

Now let’s consider a case in which we use video cameras with AI to do all of this. It not only seems attainable, but also very cost-efficient.


Proving the Edge AI Concept

In a recently published case study on how video analytics and edge AI have drastically improved intersection safety and decreased infractions, the deployed solution delivered a 900% increase in identified traffic infractions at an intersection in Taiwan – click the button below for the full story:

It’s important to note that this was all accomplished with existing and available technologies – the future is truly here and is helping law enforcement and overall safety in many public settings.

The Smart City Roll-Out

With more interest in creating new systems and updating older ones to curb traffic violations and increase public safety, companies like Gorilla, whose solutions can be deployed today and make budget-minded sense, are poised to take the market lead as cities across the globe continue their roll-outs.

Thoughts and Further Reading

The instruction to look both ways before crossing should never stop being taught, but we can now cross our streets more confidently as edge AI solutions continue to improve public safety and build up Smart City infrastructure.

To read more about edge AI, click here

For more info on Smart City solutions, click here

You’re also very welcome to contact us at anytime here: Contact Us

If you liked this article, why not share it or leave a comment below? We love conversation and talking about our tech!


Edge AI Technology

Exploring Technology and Terms: Edge Computing, Edge AI, IoT

We hear these words all over the place: edge computing, edge AI, IoT, video analytics, etc. Every day, tech company event pages, news articles, personal blog posts, social media blurbs, and even images on photo sites are littered with so many tech terms and acronyms that it’s a wonder any one person could understand what they all mean.

It’s understandable that as time and technologies progress, tech companies fragment, creating an unfathomable sea of niche solutions. As this happens, being able to concisely describe new tech is imperative to community understanding and, of course, to the bottom line of making sales.

In this article, we’ll explore and describe some of the terms that directly relate to what we do here at Gorilla. The reason we’re writing this is simple—Gorilla believes giving everyone a solid understanding of certain tech-speak through education and discussion is vital to progress and growth.

Ideas Behind Edge Computing and Edge AI

It’s an incredibly interesting idea that words have histories of their own, that there are different types of dictionaries, and that language is what we as a society make it. With that in mind, let’s go into what these words look like when they’re alone and then when they’re put together.

Edge

This one’s pretty straightforward, as its history isn’t based in tech, but what does it mean when we apply it to technology and computer systems?

Edge, when talking about tech, generally refers to devices that provide an entry point for data from the real world into a digital system or network.

Let’s go over ideas like computing and AI before we get into what it means when we attach the word ‘edge’ in front of them.

Computing

We, as a society, have been using compute and its conjugated forms for about four hundred years and like many words from around then, it comes from Latin [com ‘with, together’ and putare ‘to reckon’].

It follows that compute means to count or sum up, which is exactly what computers do. Adding the -ing to compute shows us that the computer or device is actively doing that, and in a lot of cases they are computing 24/7.

AI

It’s fairly safe to say that most people know the acronym AI stands for Artificial Intelligence.

AI is defined as the capacity of machines (usually computers) to exhibit intelligent behavior. The phrase was coined in 1956 and the ‘artificial’ part of it was meant to emphasize the belief that human intelligence is real and machine intelligence is not. Over the course of its (relatively short) life, AI has segmented into categories in which the intelligence side is different, e.g. machine learning, deep learning, computer vision, facial recognition, behavior analysis, etc… But that’s for a different article. When all is said and done, AI is software that runs on hardware.

Now that we have a pretty good (albeit basic) foundation of edge, computing, and AI, let’s put these together and look at edge computing and edge AI.

Edge Computing

So we now know that edge devices are near to or act as the entry point for data into a network or system.

Before edge computing came along, these devices were conduits that sent the data they gathered to a central network or system to process. It’s logical and correct to say that edge computing means a device on the edge is tasked with doing some amount of the data processing itself before that data gets passed to the central system.
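That division of labor can be shown in a few lines. In this sketch, the edge device reduces raw samples to a compact summary before anything crosses the network; the function names and payload fields are illustrative, not a real protocol:

```python
# Edge computing in miniature: instead of streaming every raw
# reading to a central server, the edge device summarizes locally
# and forwards only the compact result.

def summarize_on_edge(readings):
    """Runs ON the edge device: reduce raw samples to a summary."""
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "peak": max(readings),
    }

def send_upstream(summary):
    """Stand-in for the network hop to the central system."""
    return {"received": summary}

raw = [21.0, 21.5, 35.2, 21.1]       # e.g. one burst of sensor data
payload = summarize_on_edge(raw)     # many samples shrink to 3 fields
ack = send_upstream(payload)
```

The win is bandwidth and latency: the central system sees a few fields per interval instead of every raw sample.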

Edge AI

Keeping this logic train rolling and to put it simply, edge AI falls within the idea of edge computing. The key difference is that we’re defining what kind of processing happens in the edge device. Hint: It’s AI processing.

We now know not only what edge, AI, and computing are, but also what edge computing and edge AI are. From here, let’s explore edge AI and see a little of how it relates to the real world.

Regarding IoT and Edge AI

Edge devices and the Internet of Things (IoT) make up a network of physical devices and sensors which are intertwined with our personal, commercial, and public infrastructures.

Edge AI and IoT devices are ubiquitous in our daily life. This is by no means an exhaustive list, but a few devices that can be categorized as edge or IoT are: computers, tablets, smartphones, surveillance equipment, access control panels, POS systems, display devices, RFID tags, iBeacon beacons, Bluetooth sensors, WiFi sensors, Zigbee sensors, Z-Wave sensors, BACnet objects, and more.

What's the importance of the two?

Edge AI and IoT help many industries become more efficient and safer by improving accuracy and reducing human error through automation. The banking, education, healthcare, hospitality, manufacturing, retail, security, and public service industries are all tapping the potential of these devices to improve our day-to-day lives. Categorizing these into groups helps companies involved in the creation and deployment of edge AI and IoT understand more deeply how people and society might use their solutions.

Let’s Talk Analytic Data

The devices described above contain sensors that collect structured and unstructured data.

Structured data, such as GPS coordinates, motion detection, and temperature, is easily organized and acted upon by computers.

Unstructured data, such as video and images, is not easily classified or understood by computers. This is the kind of data that most edge AI is focused on processing. Let’s take a closer look at how that happens.

Data is transmitted along with the time of collection to form an event. Those events are then sent to edge computing devices with Gorilla IVAR® or other computing solutions for preprocessing, and then forwarded for analysis on public, private, or hybrid servers. Here, the unstructured video and image data is transformed into structured data via deep learning. Events are stored in software-defined storage, then correlated and categorized for use in biometric authentication, account management, device management, business intelligence, and more.
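The pairing of data with its collection time into events, and the downstream categorization, can be sketched like this (the field names are ours for illustration, not Gorilla's actual schema):

```python
# A reading plus its collection time forms an event; downstream,
# events are grouped by kind for consumers such as business
# intelligence or authentication systems.

from dataclasses import dataclass

@dataclass
class Event:
    source: str        # which device/camera produced it
    timestamp: float   # time of collection
    kind: str          # e.g. "face", "vehicle", "motion"
    payload: dict      # structured data extracted by the analytics

def categorize(events):
    """Group events by kind for downstream consumers."""
    buckets = {}
    for e in events:
        buckets.setdefault(e.kind, []).append(e)
    return buckets

stream = [
    Event("cam-01", 1600000000.0, "face", {"match": "employee-42"}),
    Event("cam-01", 1600000001.5, "motion", {"zone": "lobby"}),
    Event("cam-02", 1600000002.0, "face", {"match": None}),
]
by_kind = categorize(stream)
```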

Gorilla Edge AI Technology is Different

Gorilla was the first to offer high-performance video analysis optimized by the Intel® OpenVINO™ toolkit to take edge computing & IoT data and business intelligence into the age of machine learning. The advent of this technology allows edge devices to do far more efficient computing at the edge without needing GPUs.

Edge AI Computing

As we discovered earlier in this article, edge AI can mean many things. A perfect example of this is how IVAR uses a combination of machine learning and deep learning through customizing MobileNet SSD, ResNet and more to provide various and highly accurate analytics in edge computing.

Intelligent Video Analytics

Drawing out event information from a video camera’s live feed is the goal of many video analytic solutions. How that is achieved is a multi-step process. Gorilla uses multiple AI algorithms collaborating to deliver intelligent video analytics (IVA) that transform data into actionable, real-time insights with people, vehicle, and object identification capabilities.

While some edge AI video analytic solutions require specific or proprietary camera brands, edge devices, or expensive GPUs, Gorilla’s edge AI technologies, as seen in IVAR, can process video from various live sources or archived footage and operate efficiently on systems small or large.

Recap and Thoughts on Edge AI

To the uninitiated, computing on the edge and implementing AI and computer vision sounds like a daunting feat. After exploring what these terms really mean and how they are used, it’s really not intimidating at all.

Edge computing puts the millions of devices already in place to good work. Efficient edge AI computing has direct impacts on deployment costs and on maintaining balanced network processing. These solutions, as we see in what Gorilla offers, are driving automation and revealing insights to human operators. Using machines to handle repetitive chains of events (which require immediate responses) gives us the ability to see patterns hidden in human behavior.

All in all, the reach, possibilities, and power of edge AI computing like video analytics are well at hand.

Further Reading on Edge AI

For deeper insight into the tech and to explore some examples in use today, check out our whitepapers, read some of our case studies, or contact us directly for more information.