In this post, we'll use IEX Cloud OAuth to apply back-end authentication, along with front-end token extraction.
In this post, we'll hear from several new hires about some of the benefits of joining Expero and what to expect.
Increasing the accuracy of analytics technologies drives modern business. When results more realistically reflect on-the-ground operational conditions, organizations can make better decisions faster.
Blog #1 in a series that will highlight the aspects of Pulsar that make it an attractive prospect for your messaging and data streaming needs.
Serverless ML, ML on AWS Lambda, ML on Google Cloud Functions, Scalable Serverless ML, Classify your dog for less than a penny!
First impressions of Astra, the new DBaaS from DataStax.
Netflix open-sourced Metaflow for performing data science and machine learning on cloud providers such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud (GCP) - although optimized for AWS. What features does it provide?
Advances in automation of data analysis, graph analytics, data lineage, conversational queries and IoT analytics are widely predicted for 2020. Expero has capabilities and assets already in place.
Deep dimensionality exchange is a new deep learning technique developed at Expero for performing transfer between domains of different dimensionality.
Natural language processing is a powerful tool in building investigatory user interfaces for data exploration. See how conversational UI can be used to interrogate highly connected data to produce context-driven data visualization and analytics.
Learn how Software Craftsmanship powers up Agile methods to move fast in complex projects.
JanusGraph tuning and performance best practices
Data products are productized versions of data science and machine learning initiatives that deliver value to end-users.
With Confluent and TigerGraph quickly emerging as high-quality enterprise software, learn how you can take your LDAP data, RBACs, and ACLs and quickly model and mirror them in a graph database using Kafka, a real-time streaming platform.
Use Machine Learning to increase volume, maintain profits and meet the needs of customers while maintaining a simple business policy.
Given a large batch of healthcare data, we efficiently find similar patients to determine what remedies to recommend using traditional search methods and graph algorithms.
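One simple, hypothetical way to frame "similar patients": score the overlap between patients' diagnosis-code sets with Jaccard similarity. The actual work combined traditional search with graph algorithms; everything below, including the codes, is illustrative only:

```python
# Illustrative sketch (not the actual healthcare pipeline): rank patient
# similarity by Jaccard overlap of diagnosis-code sets.
def jaccard(a, b):
    """Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def most_similar(target, patients, top_n=2):
    """Rank patients by code overlap with the target patient."""
    scores = [(pid, jaccard(target, codes)) for pid, codes in patients.items()]
    scores.sort(key=lambda kv: kv[1], reverse=True)
    return scores[:top_n]

# Hypothetical ICD-style codes purely for illustration.
patients = {
    "p1": {"E11.9", "I10"},          # diabetes, hypertension
    "p2": {"E11.9", "I10", "J45"},   # same, plus asthma
    "p3": {"S72.0"},                 # hip fracture
}
target = {"E11.9", "I10"}
```

In practice a graph database lets you extend this beyond shared codes, traversing shared providers, treatments, and outcomes.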
Learn how to avoid bugs and gain confidence when refactoring code by writing tests for your React code using Jest, React Testing Library and a Test Driven Development approach.
What makes Svelte a different UI framework, and why you should give it a try. In this article you will learn the benefits of using Svelte, the new (and different) UI framework, as opposed to others like React and Vue.
System integration: sourcing data for graph analysis.
Graph machine learning (graphML) is a branch of deep learning that can achieve higher accuracy by exploiting the relationships that link big data records together.
Globalization (G11n) of an application involves more than just translating text. Internationalization (I18n) is the process of enabling your application to be used in different languages and cultures. Localization (L10n) covers the work to provide the application in one specific language and culture. Selected locales can help in providing translated text, but some information needs to be converted as well (times, dates, currencies).
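A tiny, hypothetical sketch of that last point: translated strings are only part of the work, since dates (and currencies, numbers) must also be formatted per locale. The catalog and format strings below are invented for illustration:

```python
from datetime import date

# Hypothetical per-locale catalog: translated strings plus locale-specific
# date formats. Real applications would use a full i18n library instead.
CATALOG = {
    "en-US": {"greeting": "Hello", "date_fmt": "%m/%d/%Y"},
    "de-DE": {"greeting": "Hallo", "date_fmt": "%d.%m.%Y"},
}

def localize(locale, key):
    """Look up a translated string for the given locale."""
    return CATALOG[locale][key]

def format_date(locale, d):
    """Format a date using the locale's conventions, not just its language."""
    return d.strftime(CATALOG[locale]["date_fmt"])

d = date(2020, 3, 31)  # renders as 03/31/2020 in en-US, 31.03.2020 in de-DE
```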
#NLP #MachineLearning algorithm learns to tell #stories by summarizing #commercial #RealEstate data, earning profits and spurring #CustomerRetention. #BINGO!
When localizing an application, treat the capabilities as features. Consider the specific use cases and work with the users to refine the approach. There may be design and layout adjustments needed per language. If the application is a CMS, content as well as application resources may need translations.
See why accumulators help TigerGraph’s GSQL query language stand out among other native graph databases.
Reinforcement learning at scale: scheduling thousands of vehicles in multiple environments, and how we made it work.
Learn how to protect your resources by setting up serverless OAuth authentication with Auth0 and AWS Lambda@Edge.
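For flavor, here is a minimal sketch of one step in token-based auth: decoding a JWT's payload and checking its `exp` claim. This is illustrative only; a real Auth0 + Lambda@Edge setup must also verify the token's signature against the provider's public keys, which this sketch deliberately skips:

```python
import base64
import json
import time

def decode_payload(token):
    """Decode the middle (payload) segment of a JWT WITHOUT verifying it."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def is_expired(token, now=None):
    """Check the standard `exp` claim (seconds since the epoch)."""
    now = now if now is not None else time.time()
    return decode_payload(token).get("exp", 0) <= now
```

Never treat an unverified payload as trusted; signature validation against the issuer's JWKS is what actually protects the resource.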
Cassandra, DataStax, ScyllaDB, CosmosDB, RedShift - they all scale horizontally, and you will pay more as you add nodes; even if the software is free, the compute time is not. Use Gatling to forecast your budget needs so you don’t surprise your CFO.
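The budgeting idea reduces to simple arithmetic: take a per-node throughput measured under load (for example with Gatling) and project node count and monthly spend for a target workload. All numbers below are hypothetical:

```python
import math

# Back-of-the-envelope capacity/cost model. Every figure here is made up
# for illustration; plug in your own load-test results and cloud pricing.
def nodes_needed(target_ops_per_sec, ops_per_node, headroom=0.7):
    """Nodes required while keeping each node at `headroom` utilization."""
    return math.ceil(target_ops_per_sec / (ops_per_node * headroom))

def monthly_cost(nodes, hourly_rate, hours=730):
    """Compute spend per month at a given per-node hourly rate."""
    return nodes * hourly_rate * hours

# e.g. Gatling shows 8,000 ops/s per node; we need 50,000 ops/s sustained.
n = nodes_needed(50_000, ops_per_node=8_000)
```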
Digital Twins are the next logical step in an IoT implementation. Data storage for a digital twin can include a property graph database such as JanusGraph or DSE Graph, a time series database like TimescaleDB, and an analytical database like Redshift, Google Bigquery, or HP Vertica.
Building a data ingestion pipeline using Spark, Kafka, DataStax, NiFi, and Pentaho.
Deep learning, specifically recurrent neural networks, forecasts nonlinear time-series illness signals.
Wireframes are intended to call out key moments and interactions in software design in order to provide clarity into how something should look, feel, and function.
As of late Q2 of 2018, there’s a new entry into the graph database marketplace, Amazon Neptune.
Using deep-learning recommender system technology, Expero detects opioid fraud in health care.
A sneak peek of this year's Graph Day San Francisco talk on ACID & integrating JanusGraph with FoundationDB.
The graph database space is rapidly expanding as more and more companies identify potential use cases that require the traversal of highly connected network and hierarchical data sets in ways that are cumbersome with RDBMSs and NoSQL solutions.
Follow these tips to speed up your JanusGraph queries when running against a variety of storage backends including Apache Cassandra, ScyllaDB, Apache HBase, and Google Cloud Bigtable.
Single precision vs Double precision on a CPU vs GPU in high performance computing simulation.
Reinforcement Learning of a deep neural network has been applied to the problem of supply chain logistics: In a stochastic environment, how to optimize pickup and delivery schedules.
Graph machine learning finds dissatisfied customer cohorts and recommends optimal intervention measures.
With so many options in the fast-moving graph database world, which one should you choose?
Money laundering and credit card fraud detection by graph machine learning.
The property graph database space has been dominated by a handful of names that, on balance, are not major players in the broader software marketplace.
Graph convolutional networks apply deep learning to big graph data to yield business insight.
Explore your tuning options for increasing JanusGraph write throughput and lowering latencies.
Machine-learning entity resolution: deduplicating FBI criminal records with supervised logistic regression and unsupervised clustering.
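A toy illustration of the entity-resolution idea (nothing here reflects the actual records or model): normalize names, score field agreement with hand-picked weights, and flag pairs above a threshold. A real pipeline would learn those weights, for example with logistic regression:

```python
# Hypothetical records and weights, purely for illustration.
def normalize(name):
    """Lowercase, strip periods, and collapse whitespace."""
    return " ".join(name.lower().replace(".", "").split())

def match_score(a, b):
    """Weighted field agreement; the weights here are made up."""
    score = 0.0
    if normalize(a["name"]) == normalize(b["name"]):
        score += 0.6
    if a.get("dob") == b.get("dob"):
        score += 0.4
    return score

def is_duplicate(a, b, threshold=0.8):
    return match_score(a, b) >= threshold

r1 = {"name": "John  Q. Smith", "dob": "1970-01-01"}
r2 = {"name": "john q smith", "dob": "1970-01-01"}
r3 = {"name": "Jane Smith", "dob": "1970-01-01"}
```

The unsupervised half of the approach would then cluster high-scoring pairs into entities rather than comparing records two at a time.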
Messages coming into your Spark stream processor may not arrive in the order you expect. Learn how to handle the unexpected with Spark and Databricks, whether your graph lives in JanusGraph, DataStax, Neo4j, or Microsoft Cosmos DB.
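One common tactic, sketched here independently of Spark's own APIs: buffer arriving events in a min-heap keyed by event time, and release them only once a watermark (the highest event time seen, minus an allowed lag) has passed them:

```python
import heapq

class Reorderer:
    """Toy event-time reorder buffer with a watermark (illustrative only)."""

    def __init__(self, allowed_lag):
        self.allowed_lag = allowed_lag
        self.buffer = []      # min-heap of (event_time, payload)
        self.max_seen = 0

    def push(self, event_time, payload):
        """Buffer one event; return any events now safe to emit, in order."""
        heapq.heappush(self.buffer, (event_time, payload))
        self.max_seen = max(self.max_seen, event_time)
        watermark = self.max_seen - self.allowed_lag
        released = []
        while self.buffer and self.buffer[0][0] <= watermark:
            released.append(heapq.heappop(self.buffer))
        return released
```

Events arriving later than the allowed lag still need a policy (drop, side-output, or upsert), which is where the graph store's write semantics matter.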
Property graph schema design decisions for vertex and edge definitions in Neo4j, DataStax Graph, and JanusGraph.
Connecting to Microsoft Cosmos DB with Apache TinkerPop Dropwizard.
Deep-learning convolutional generative adversarial neural network crushes web design.
Graphs and graph datasets are rich data structures that can be used uniquely to improve the accuracy and effectiveness of machine learning workflows. Some of the key interactions are graph analytics as features, semi supervised learning, graph based deep learning, and machine learning approaches to hard graph problems.
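To make "graph analytics as features" concrete, here is a toy sketch that derives per-node degree and triangle counts from an edge list; features like these could feed a downstream classifier. The graph and the feature choices are illustrative:

```python
from collections import defaultdict

def build_adjacency(edges):
    """Undirected adjacency sets from an edge list."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def node_features(adj, node):
    """Simple structural features: degree and triangle membership count."""
    neighbors = adj[node]
    triangles = sum(
        1 for a in neighbors for b in neighbors
        if a < b and b in adj[a]   # neighbor pairs that are themselves linked
    )
    return {"degree": len(neighbors), "triangles": triangles}

edges = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
adj = build_adjacency(edges)
```

At scale these computations move into the graph engine itself, but the principle is the same: structural signals become columns in the training set.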
Learn how Graph Technology can help to identify risk and fraud patterns in order to quickly respond. Many new fraud rings use sophisticated measures for credit card and other methods of fraud. Utilizing Graph technology will allow you to see beyond individual data points and uncover difficult-to-detect patterns. Join us to learn how to maximize time and resources with Graph technology vs. traditional relational database platforms.
Dave and Ted discuss Graph Day and their thoughts on DSE, Neo4j, Microsoft's CosmosDB, and JanusGraph.
In this post, we're going to dive into the client-side single-page application, commonly abbreviated as “SPA”. What is considered an SPA? What are important choices to be made when building one? How do you deploy it? When is an SPA a good choice or a bad choice?
The next generation of React, aka Fiber, is eagerly anticipated, and Expero's front-end team chimes in with their first impressions. We don’t intend to comprehensively go into the differences between React Fiber and the current React architecture (code-named React Stack). When upgrading React, explicitly deprecated features tend to be pretty straightforward to address and are easily called out with tooling like ESLint. Still, some changes can be more insidious, as they may have side effects that are difficult to spot or reliably reproduce.
In this blog post, I'll discuss the process of building a microservice backed by a graph database and the technologies leveraged to accomplish it. I'll build this microservice in Java, using Maven for its declarative dependency management and build process and Dropwizard for its straightforward architecture and configuration, and then connect everything up to an Apache TinkerPop-enabled graph database.
Trying to modernize monolithic legacy applications is hard: these applications are core drivers of the business and the risk of messing them up is too great. However, as time goes on, the cost of maintaining these monoliths grows.
We get asked that question a lot given our early customer work with Titan evaluations, participation in the JanusGraph project and usage of Apache TinkerPop while concurrently being a premier DataStax Graph partner.
DataStax released 5.1, Neo4j released 3.2, Microsoft announces CosmosDB; there’s a lot of stuff happening in the graph database world. Looks like a prime time for some Gremlin training.
Software and web developers often wear many hats, including the UX/UI hat. But some developers lack the knowledge to design UIs or to collaborate effectively with UX designers and researchers.
As a user researcher, I’m always inclined to say, “Test everything, all the time!” when people ask, “What/when/how should we validate with users?” That’s my pie in the sky: the place where there’s all the time and all the budget in the world to get every last detail or spec just right for the good of the user, the product, and, ultimately, the business. But that’s not real life. Projects run on strict budgets and tight timelines, and there’s not always a lot of wiggle room.
Do you remember the first time you saw a commercial about “the Cloud?” That was one of the pivotal moments for technology buzzwords going mainstream. It’s been a nonstop thrill ride since then: Web 2.0. Internet of Things. Big Data. Machine Learning. Like “the Cloud,” the term “machine learning” is thrown around a lot, but it’s not entirely clear who it is useful to. People who follow it are aware that machine learning techniques were used by Google to create an unstoppable Go playing machine, and that it allows computers to drive with abilities getting closer to human drivers by the day.
Webinar presentation on how to combine Graph databases with an iterative, user-driven approach and the latest UI tools to enable your team to start creating valuable user experiences with graph data.
This year at the Data Day 2017 conference in Austin, TX, keynote speaker Emil Eifrem declared 2017 the Year of Graph. Graph data storage certainly is becoming more mainstream, with a myriad of both commercial and open-source options currently available and maturing at an accelerated pace. But so what? Why should user experience practitioners, or anyone else that is not a database administrator, care about this trend in data storage technology?
What does the term “search” mean to you?
A quick recipe for bringing up your first web worker with Webpack in the mix and share some tips to keep your implementation clean.
Web application types include static website, traditional server-side rendering, client-side single-page application, and isomorphic single-page applications.
Product owners and stakeholders have a tendency to skip over discovery research and go straight to design—and then skip over validation research and go right to release. One of the main drivers behind this tendency is the fact that looking at designs is fun. Looking at numbers and bulleted lists of findings is not (as much) fun (for stakeholders). With designs, they get to see their product progressing and growing from inception to build. Data is more behind-the-scenes; it may drive design, but so what?
What are the next big trends in UX? At our recent Expero Summit, we discussed many advances that promise to transform how users interact with technologies. As augmented reality and other technologies take substantive form, it’s more and more about what the user needs from these amazing technologies and less about how cool the technology actually is. It’s a given that the technology is only going to get cooler. What’s not as obvious is whether the user is ready for it.
One of the major hesitations from product stakeholders regarding end-user engagement, specifically user testing, is that they often don’t want anyone to see it till it’s “perfect” or “ready” or “MVP.”
A supernode is a vertex with a large number of edges. In graph theory, these high-degree vertices are known as hubs.
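A quick way to spot supernodes in a raw edge list (in a live graph database you would run the equivalent degree query in its own query language; this standalone sketch is just for illustration):

```python
from collections import Counter

def degree_counts(edges):
    """Undirected degree of every vertex in an edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def supernodes(edges, threshold):
    """Vertices whose degree meets or exceeds a chosen threshold."""
    return {n for n, d in degree_counts(edges).items() if d >= threshold}

# Tiny illustrative graph: "hub" touches every other vertex.
edges = [("hub", x) for x in ("a", "b", "c", "d")] + [("a", "b")]
```

In real graphs the threshold is domain-specific; what matters is that traversals through these hubs dominate query cost and usually need special handling.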
When designing and developing software, it is critical to take into account the limitations of the technology employed, especially hardware—things like computers, boxes and other physical devices. But there’s another aspect to hardware that should be taken into account and is often overlooked: the user.
OrientDB is one of several popular graph data stores on the market today. It provides a multi-model approach with the powerful nature of a graph database and the flexibility of a document data store. If you have decided to build out your multi-tenant application on top of OrientDB, you are in luck as it has several built-in, out-of-the-box methods for handling multi-tenancy.
How do you handle customer #2? You delivered an MVP of some hosted software for customer #1. Your brother-in-law knows a guy who has a similar problem and after a lunch meeting, now you need to add customer #2 to your incubating SaaS tool. Of course customer #1 and customer #2 shouldn’t be able to see each other’s data, but you don’t necessarily want to install and configure everything all over again just because you added another customer.
In one of the projects Expero worked on several years ago, the client chose to build their own custom authentication solution. For three weeks, one developer’s status at the scrum every morning was “security.” It took that competent developer several weeks to get a very basic custom solution in place. Additionally, that solution didn’t even have integration with other identity providers or any other bells and whistles! You can easily double that estimate if you want even a few providers and a user interface that doesn’t look drab.
Neo4j is the most popular graph data store available today. It leverages graph technologies to help build modern high-performing applications, but it does not have any native multi-tenant support. However, you may have decided to build out your multi-tenant application and that Neo4j is the right graph data store to fit your needs. In any multi-tenant system, the trick (from a data-store side) really comes down to how to isolate one tenant’s data (physically or logically) from another tenant’s.
So you’re going to build a multi-tenant application and now it’s up to you to figure out how to make it all work. Ask any software engineer who has built one and they will tell you that multi-tenant applications are inherently more complicated than single-tenant applications. That complexity comes from the added overhead required to ensure that your tenants’ data are secured and isolated from one another (e.g., Tenant 1 can’t see Tenant 2’s customer list) and that large tenants don’t adversely affect other tenants in the same environment (e.g., Tenant 1 does not use all the resources, thereby slowing the performance for Tenant 2). The overhead caused by these requirements may take the form of either operational or developmental complexity, but the key to building an effective system in any multi-tenant scenario is to reduce that complexity.
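The logical-isolation idea can be sketched in a few lines: tag every row with a tenant identifier and route all reads through a repository that is permanently scoped to one tenant. The shapes and names below are hypothetical:

```python
class ScopedRepo:
    """Toy tenant-scoped data access: reads can never cross tenants."""

    def __init__(self, rows, tenant_id):
        self._rows = rows
        self.tenant_id = tenant_id

    def query(self, **filters):
        """Return matching rows, always constrained to this tenant."""
        out = []
        for row in self._rows:
            if row["tenant_id"] != self.tenant_id:
                continue  # enforce isolation before any other filtering
            if all(row.get(k) == v for k, v in filters.items()):
                out.append(row)
        return out

rows = [
    {"tenant_id": "t1", "customer": "Acme"},
    {"tenant_id": "t2", "customer": "Globex"},
]
```

The design point is that tenancy is enforced in one chokepoint rather than sprinkled through every query, which is exactly the complexity reduction the post argues for.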
Among the product announcements from AWS re:Invent 2016, a new triplet of production web services has emerged under the heady title of Artificial Intelligence.
Whether it’s a stand-alone service, a microservice or a back end for a web application, a consistent and robust API is no longer something to be handcrafted. Using an API framework like Swagger promises to give you a leg up and help you build a robust API at lower cost.
Visual design is about more than keeping up with popular design trends, especially for complex apps.
We all know how awesome user personas are. They help all the king’s men—designers, researchers, product owners, stakeholders, investors, on and on—understand a particular user type’s behaviors, needs, goals and motivations.
Our spin on how UX teams have continued to tailor our UX design principles to the product development process.
DataStax Enterprise 5.0 contains much more than a version bump of its component pieces. In particular, DSE 5.0 includes the new graph database, DSE Graph. DSE Graph is an Apache TinkerPop enabled distributed property graph database that uses Apache Cassandra for storage, DSE Search for full text and geospatial search, and DSE Analytics for analytical graph processing.
As Agile principles and Lean methodologies continue to take center stage in product management and strategy, it’s easy to get caught up in daily scrums and design iterations and shoot right past the user research (UR).
If you know anything about Expero, you know we specialize in solving “complex problems.” This means we’re not working on your average brochure website or e-commerce app. We’re tackling apps and software targeted to niche domains, with expert end-users who have very specific needs and goals and very complicated problems to solve.
Using Google gRPC and Apache Cassandra to create a high-performance data pipeline for storing time series data that comes from IoT-style applications.
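One detail of such pipelines worth sketching: bucketing each device's readings by day so that no single storage partition grows without bound. The key format below is a hypothetical illustration of that Cassandra-style pattern:

```python
from datetime import datetime, timezone

def partition_key(device_id, ts):
    """Compose a hypothetical partition key: device plus UTC day bucket.

    Bounding each partition to one device-day keeps Cassandra-style wide
    rows from growing without limit as readings stream in.
    """
    day = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d")
    return f"{device_id}#{day}"
```

In the real schema this would be the partition key of the table, with the reading timestamp as the clustering column so a day's data reads back in order.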
Over the past few months I got my first chance to work with Kafka Streams. For those who don’t know, Kafka Streams is a new feature in Kafka 0.10.0 that adds stream processing capabilities into the existing Kafka platform.
Josh Perryman visited the Northwest in July 2016 for DataDay Seattle and gave an updated version of his Graph Database Shootout.
How can you retool your SQL talent to deploy on your graph project? According to Google Trends, interest in graph databases is increasing while SQL interest is level or even decreasing – but SQL searches are still 43 times more prevalent than graph.
Achieving user adoption goals for enterprise software can be challenging.
The Austin Data Geeks & Austin Graphs Meet-ups invited Josh Perryman to do an encore presentation of our GraphDay presentation: Graph Database Engine Shoot-out.
A case study at Graph Day recounting a client study we did to see whether their database could be reorganized to offer improved query performance. We looked at graph databases (OrientDB, Titan, Neo4J) because they thought of their data as graph data, and relational (Postgresql) because that’s what their database already implemented.
We are in an era of unprecedented innovation in databases. Data-intensive companies are grappling with whether the many new options — NoSQL, Key-Value, Document, Column Family, Column-Oriented — are appropriate for them. The commercial success of Facebook and LinkedIn makes graph databases a hot area of investigation. Unlike many new databases, they are not a variation on or a simplification of relational databases. Instead they require new ways of thinking and modeling data. In return they can answer truly novel questions.
In this post we are going to explore a slightly more complicated yet common scenario involving continuous deployment of a .NET web application.
In this post we summarize what sorts of questions we feel like a proof of concept project around graph databases can answer, and how we typically tackle them.
We are meeting more people who are interested in looking into the world of graph databases. Palladium has executed proofs-of-concept for clients to help them explore this world. In this post we summarize what sorts of questions we feel like a proof of concept project can answer, and how we typically tackle them. For our presentation at Graph Day, we’ll be walking through one in particular, but really there are a variety of answers you may want.
“Everyone gets delusions of grandeur!” That’s what Han Solo said after being frozen in carbonite. I’ve been solving data problems for customers for the last year and a half and am now getting back into graph DBMSs. We took a nice look at Titan last week, and I can’t wait to play with that some more. I’m going to give a bit of the same to Neo4j. All of this as prep for my talk at GraphDay 2016 in Austin, TX.
Smartphone and tablet experiences are a major force influencing product development, irrespective of industry, according to an internal survey we recently conducted with our clients. Business users want to access their software applications from a myriad of devices, all behaving correctly regardless of form factor.
As part of the work we’re doing to refresh our graph database evaluation for a couple of clients (and our upcoming talk at Graph Day!) we took Titan 1.0 out for a spin last week. We’ll be doing more in-depth explorations on some in-house and public datasets over the next few weeks, but here are some preliminary impressions based on a contrast with the Titan we came to know a year ago or so.
“Expero has partnered with us to envision and implement new products and key improvements to existing products that have both delighted our users and set us apart as innovators in the industry.”
Valent wanted to jump ahead of the competition and increase customers’ visibility into the lifecycle of their crop production to increase marketable-grade yield.
“Expero was fundamental to the success on this project, effectively communicating their progress throughout and helping our scientists develop an understanding of deep learning techniques.”
MyHouseby wanted users to love the experience of discovering and customizing their new home in new ways using new technologies.
Knowing who has the flu is hard. Doctors assess the small minority of sick people who make it into a clinic and send the test results to the CDC every few weeks. The CDC models this information and retroactively corrects its national influenza-like illness (ILI) report every two weeks.
The data from fifty years of operation was messy and segregated, so the regional planners often relied on localized tribal knowledge to get customers their product in time to meet SLAs. This intuition-driven delivery mechanism was ripe for system-wide optimization to improve overall efficiency and reduce network costs.
We have predictive analytics for directional drilling that optimize recovery and minimize drilling costs and risks, but they’re too hard to understand and too hard to use.
Upstream geophysics and seismic users are demanding and specialized. Expero was asked to deliver something innovative, thought to be nearly impossible, yet usable and familiar - and we delivered.
With Expero's help, we were able to identify must-have requirements and implement an iterative user-centered design process for the long term.
Our industry users are demanding and specialized. We asked Expero to deliver something both innovative and familiar - and they delivered.
Expero's deep knowledge of user experience patterns and philosophies enabled us to produce an outstanding product that was a quantum leap in our industry.