Lakeflow Designer in Databricks: No-Code Pipelines in Practice

Databricks has firmly established itself as one of the leading cloud data platforms, bringing the Data Lakehouse architectural paradigm to the market. Over the years, what started as a data science and machine learning platform built on Apache Spark has evolved into a comprehensive platform for enterprise analytics.

Since 2025, Databricks has increasingly focused on features that counter the perception that the platform is only suited for developers and highly technical users. AI/BI Dashboards and Genie Spaces enable intuitive data visualization and natural language interaction. With “Databricks One,” a simplified web interface is available that abstracts away technical complexity and makes the platform accessible without deep technical expertise.

The next major step is a graphical editor for data pipelines: assembling ingestion and transformation processes through a visual interface using drag-and-drop components on an interactive canvas. This is enhanced by AI-assisted capabilities that understand the full context of available Lakehouse data. This new editor is called Lakeflow Designer and is currently available in private preview.

Recently, we had the opportunity to test the Lakeflow Designer in a client project and gain first-hand insights into its current capabilities. In many organizations, graphical data integration tools are still deeply embedded in the data landscape - often as legacy systems. Replacing these with a unified platform like Databricks would be a major step toward modern, future-proof data architectures.

What can the Lakeflow Designer already do today and what is still missing for a full replacement of established tools? Here are our findings.

Databricks Lakeflow Designer: The New Graphical No-Code Pipeline Editor

Features and Positioning in the Databricks Ecosystem

The Lakeflow Designer, as a graphical editor for data pipelines, complements the existing low-code building blocks within the Databricks portfolio. Through “Lakeflow Connect,” various data sources can be connected directly to the Unity Catalog, which serves as the central layer of the platform. This enables federated access to traditional relational database systems as well as querying the REST interfaces of well-known cloud service providers. With Apache Spark Structured Streaming and the newly introduced Zerobus ingest, an integrated event queue system, streaming data can be ingested as well.

The low-code framework “Lakeflow Connect” for ingesting data from third-party systems into the Databricks Lakehouse is already an integral part of the platform. Lakeflow Designer complements this by enabling downstream transformation processes on data that is already available in the Unity Catalog. In addition to the sources shown, data can also be ingested from virtually any other source using Python frameworks such as dlthub.

User interaction with fully modeled data takes place via dashboarding tools or LLM-supported chat interfaces. Lakeflow Designer therefore fills precisely the gap between low-code data ingestion processes and low-code business intelligence and analytics.

The core function of the Designer editor is the transformation of datasets already available in the Databricks Unity Catalog - for example, as preparation for visualization in dashboards, or for use in machine learning applications or agent-based AI systems. “Agent Bricks,” the no-code interface for developing agent-based workflows on Databricks, is also reportedly close to a full release in Europe.


Watch the recording of our webinar "Bridging Business and Analytics: The Plug-and-Play future of Data Platforms"



Currently available features in Lakeflow Designer include:

  • Intuitive user interface: The Designer provides a graphical editor that we found to be highly intuitive and easy to use. Users can switch between a SQL code view (“Query”) and the new visual view (“Visual”).

  • Drag-and-drop: Pipelines are created by dragging and dropping nodes. A variety of operators are available, including sources, outputs, Databricks SQL AI functions, and classical SQL transformations such as aggregates, filters, joins, etc. Custom SQL code blocks can also be added for more complex statements.

  • Live data preview: A key feature is the interactive preview, which allows users to see data changes directly during the design process. At each node, both input and output states are displayed as a preview based on a small subset of the loaded data.

  • Annotations: In addition to standard transformations, notes can be added to the canvas, and nodes can be grouped and automatically arranged.

  • Orchestration: Once the pipeline reaches the desired state in the editor, execution can be configured directly as a scheduled routine via a Databricks job.

Operator nodes can be added to the pipeline via drag-and-drop or by selecting them directly on the canvas. In addition to data sources and sinks, typical SQL operations and Databricks SQL AI functions are available.
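To make this concrete: a source → filter → join → aggregate node chain in the Designer corresponds to an ordinary SQL statement. The following minimal sketch runs an equivalent query against an in-memory SQLite database; all table and column names are invented for illustration and are not part of the Designer itself:

```python
import sqlite3

# In-memory database standing in for tables already registered in Unity Catalog.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL, status TEXT);
CREATE TABLE customers (customer_id INTEGER, region TEXT);
INSERT INTO orders VALUES (1, 10, 120.0, 'shipped'), (2, 10, 80.0, 'cancelled'),
                          (3, 11, 200.0, 'shipped');
INSERT INTO customers VALUES (10, 'EMEA'), (11, 'APAC');
""")

# Filter node -> join node -> aggregate node, expressed as one SQL statement,
# roughly what the Designer assembles behind its visual canvas.
query = """
SELECT c.region, SUM(o.amount) AS revenue
FROM orders AS o
JOIN customers AS c ON o.customer_id = c.customer_id
WHERE o.status = 'shipped'   -- filter node
GROUP BY c.region            -- aggregate node
ORDER BY c.region
"""
for region, revenue in conn.execute(query):
    print(region, revenue)
```

The point of the visual editor is that each clause of such a statement becomes one inspectable node with its own live data preview, while the resulting artifact remains a plain SQL query.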

Lakeflow Designer User Experience: Working with the Visual Editor

The main entry barrier is actually finding the new Lakeflow Designer: the new editor is an extension of the Databricks SQL Editor interface, where users can switch between the standard code view and the visual editor for new queries via a rather unobtrusive toggle switch.

At each node of a pipeline, a live data preview can be accessed, showing both the input and output data formats.

Pipeline creation in the visual editor is intuitive and based on drag-and-drop functionality. Various operators are available, including data sources, outputs, AI functions, and classical SQL transformations such as filtering, joining, or sorting data. From the Unity Catalog side menu, all tables can be dragged directly onto the canvas with a single click to be used as additional data sources.

A key advantage for efficiently designing integration flows is the interactive live data preview, which makes changes to the data visible directly during the design process. For requirements beyond the standard operators, custom SQL nodes can also be integrated into the data flow.

On a technical level, each output node defined in the Designer automatically generates an Apache Spark Declarative Pipeline (formerly known as “Delta Live Tables”), which creates a corresponding materialized view. Pipelines can be executed ad hoc directly from the editor and configured as scheduled routines.

In the background, SQL Declarative Pipelines are created for all output nodes, and execution workflows are generated as Databricks Jobs, including a refresh query task.
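As a rough illustration of this mechanism, the sketch below emulates what such a refresh task does for one output node: recompute a view definition and persist the result. SQLite has no materialized views, so a plain view plus a snapshot table stands in for them here; all names are invented and this is an analogy, not the actual Databricks implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE raw_orders (order_id INTEGER, amount REAL);
INSERT INTO raw_orders VALUES (1, 120.0), (2, 80.0);

-- The Designer's output node corresponds to a materialized view definition;
-- SQLite lacks materialized views, so we emulate one with a view plus a snapshot table.
CREATE VIEW order_totals_def AS
    SELECT COUNT(*) AS n_orders, SUM(amount) AS total FROM raw_orders;
""")

def refresh():
    # Analogous to the scheduled refresh query task in the generated
    # Databricks job: recompute the definition and persist the result.
    conn.executescript("""
    DROP TABLE IF EXISTS order_totals;
    CREATE TABLE order_totals AS SELECT * FROM order_totals_def;
    """)

refresh()
print(list(conn.execute("SELECT * FROM order_totals")))
```

Separating the declarative definition from the scheduled refresh is exactly what makes the generated pipelines reusable outside the visual editor.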

Professional development workflows are partially supported insofar as queries created in the editor can be stored in Git repositories and versioned. Designer pipelines are stored as special notebook file types (*.dbquery.ipynb).

Gaps and Future Potential

Despite its intuitive usability, the Databricks Lakeflow Designer still exhibits several functional limitations. In particular, complex transformations such as PIVOT operations are not yet supported purely through the graphical interface and still require the manual integration of SQL code nodes. Such custom code components in graphical pipeline editors often become long-term maintenance challenges and, in our experience, tend to age poorly in teams with a certain level of personnel turnover. Another significant limitation is the lack of navigation and linking within the Databricks user interface: there are no direct back-links between the catalog view, Spark Declarative Pipelines, and the actual Designer query. Lineage visualization is also currently limited to individual pipelines that create a materialized view and does not provide any broader overview of related tables or other materialized views.

All “Output” nodes in Designer pipelines generate materialized views, which are created via automatically generated Spark Declarative Pipelines.

For use in professional enterprise environments, there is - based on our test - still a lack of clear best practices and seamless integration into established DevOps workflows or automated deployment cycles via Asset Bundles. A pipeline created with the Lakeflow Designer in the Databricks web interface can be easily executed and orchestrated as a scheduled job; however, automated deployment to separate test or production environments via CI/CD frameworks does not yet appear to be supported. Likewise, query artifacts created with the Designer cannot easily be treated as standard pipelines - such as Python notebooks or simple SQL files - to ensure full lineage tracking and transparency.

A prominently advertised feature is the natural language input capability directly within the editor, intended for creating and configuring pipeline nodes. In our evaluation, this feature was not available.

Databricks Lakeflow Designer: Our Conclusion

Based on our initial test of the new editor, the Databricks Lakeflow Designer represents a highly promising and extremely intuitive entry point for self-service data modeling. A particular highlight is the interactive live data preview in the editor, which significantly simplifies and accelerates the design process of data integration flows. The fact that every pipeline created in the Designer results in a clean SQL query that can, if necessary, be reused in more code-centric development workflows is an important prerequisite for sustainable usage and high-quality data models.

At the same time, it must be noted that the tool is still in an early stage of development, where further improvements can be expected. Complex transformation patterns still require manual SQL code, and key enterprise capabilities - such as end-to-end lineage visualization, seamless navigation between components, and integration into professional CI/CD cycles - are currently not fully mature or documented. As a result, the Designer is an excellent tool for fast, visual results but still needs to significantly mature in terms of orchestration and DevOps readiness to fully operate in complex enterprise architectures.

In scenarios where pure self-service is the primary focus, the current state of the Lakeflow Designer is already highly valuable - for example, when a business unit independently owns data models and analytics within a dedicated Databricks workspace using a pure no-code/low-code approach.

Databricks continues to evolve as an all-round platform and is already very well suited today for business departments and less code-centric user groups.

Are you evaluating whether Lakeflow Designer fits into your data architecture or planning to replace legacy ETL tools with Databricks? Get in touch with us for a tailored consultation - we support you from strategy to implementation.

Learn more about Databricks

 

FAQ - Databricks Lakeflow Designer

Here you will find some of the most frequently asked questions about the Databricks Lakeflow Designer.

What is Databricks Lakeflow Designer? The Lakeflow Designer is a new graphical data pipeline editor that enables users to create ingestion and transformation workflows without deep programming knowledge. It acts as a no-code interface where users connect nodes via drag-and-drop on an interactive canvas to model data flows. Technically, it bridges the gap between data ingestion (Lakeflow Connect) and analytics via BI tools or AI chat interfaces.
Who is this tool primarily intended for? The tool is primarily designed for business departments and subject matter experts (SMEs) who want to create data models and analytics in a self-service manner without relying on IT experts for every line of code. The editor is particularly suited for scenarios where a pure no-code/low-code approach is the primary objective.
What functionalities does Lakeflow Designer offer? A key feature is the interactive live data preview, which shows the data state at each node in real time. The editor also provides various operators for sources, outputs, AI functions, and classical SQL operations such as filtering or joining data. Users can also add annotations and group elements on the canvas for better structure and clarity.
How is a pipeline created in the Lakeflow Designer executed technically? Each output node defined in the Designer automatically generates a Spark Declarative Pipeline (formerly Delta Live Table), which creates a materialized view. Execution can be triggered either ad hoc directly from the editor or as a scheduled routine via Databricks Jobs (refresh query tasks).
Can pipelines created in the Designer be versioned? Yes, basic Git integration is available. Designer pipelines are stored as specialized notebook file types (*.dbquery.ipynb), which can be saved in Git repositories such as Azure DevOps, versioned, and reviewed through standard code review processes.
What are the current limitations for enterprise use? At present, best practices for automated deployment workflows (CI/CD) across multiple environments - such as via Asset Bundles - are still missing. In addition, lineage visualization is currently limited to individual pipelines. Complex transformations (e.g., PIVOT) still require manual SQL nodes, which breaks the pure no-code experience.
When will Lakeflow Designer be released? Lakeflow Designer is currently available only on request for Databricks customers in private preview. No official release date has been announced, but a broader rollout can be expected during 2026.



Markus

Markus has been a Senior Consultant for Machine Learning and Data Engineering at NextLytics AG since 2022. With significant experience as a system architect and team leader in data engineering, he is an expert in micro services, databases and workflow orchestration - especially in the field of open source solutions. In his spare time he tries to optimize the complex system of growing vegetables in his own garden.
