Build better products with our product team
Add a feature to 'pin' the execution queue in a tab.
When you drag a table to the execution queue, you can see which table is being executed in the queue. However, when you use the checkbox after right-clicking and choosing Execute, you only see "Execute Execution Package Execution Queue Default", which becomes rather confusing if I have more than a few items queued and want to know which one failed. Could the name of the object being sent to the queue be displayed?
Hi, currently, if you want to add the output of a table or view to a custom insert table, you have to scroll through all the values in the 'insert from table' dropdown, which has no logical sort mechanism (if you have 200 objects in your project, that can be quite a job...). A searchable (wildcard) dropdown might do the trick; a second option would be to change to an alphabetical sort order.
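The requested dropdown behaviour, wildcard search plus alphabetical order, can be illustrated with a minimal sketch; the function name and object names are hypothetical, and a real implementation would live in the UI layer:

```python
from fnmatch import fnmatch


def filter_objects(names: list[str], pattern: str) -> list[str]:
    """Case-insensitive wildcard filter over project object names,
    returned in alphabetical order -- the two behaviours requested
    for the 'insert from table' dropdown."""
    pat = pattern.lower()
    return sorted(n for n in names if fnmatch(n.lower(), pat))
```

Typing `dim*` would then narrow a 200-object list to just the matching tables, already sorted.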
Hello, TimeXtender currently allows executing only SQL scripts: script actions can be configured to call a T-SQL statement only. However, in a DWH solution, scripts are handy for tasks such as downloading data in JSON format from an API using Python, or moving CSV files from one directory to another before treating them as a CSV data source for the ODX. If TimeXtender allowed Python and PowerShell scripts to be configured as execution steps in a TimeXtender project, it would help to run the process end to end through TimeXtender execution packages.
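As a concrete illustration of the CSV-moving use case, here is a minimal sketch of the kind of pre-processing script such an execution step could run; the directory names and function are hypothetical, not an existing TimeXtender feature:

```python
import shutil
from pathlib import Path


def stage_csv_files(inbox: Path, staging: Path) -> list[str]:
    """Move all CSV files from an inbox directory into the directory
    that the ODX CSV data source reads from, returning the names of
    the files that were moved (sorted for a deterministic order)."""
    staging.mkdir(parents=True, exist_ok=True)
    moved = []
    for csv_file in sorted(inbox.glob("*.csv")):
        shutil.move(str(csv_file), str(staging / csv_file.name))
        moved.append(csv_file.name)
    return moved
```

An execution package could call a script like this before the ODX transfer step, so the whole flow runs from one place.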
We would like the ability to see a relation diagram for the entire data warehouse instead of a relation diagram based on only one table.
I would like to make a case for modeling tables that in reality will be completely virtual, for example by generating views instead of ETL.

Context
Business units have all the tables. For all kinds of reasons, you may have multiple business units. This prohibits integration and transformation in the business unit tier; to model these transformations in the DWH tier, you have to move the data there.

Problems
Problem 1. In almost all situations, moving this data to the DWH is not needed but costs time. If you are in Azure, you will typically have all your tiers in the same database. If you are not in Azure, your tiers are probably physically close enough to enable direct queries between them.
Problem 2. Additional tables exist in the DWH that are "second-class citizens": they are only there to support another table, not because you want them there. This makes the model more complex than you want.

Solutions
Several solutions come to mind.
Solution a. Improve the "disable physical table" feature on the Performance settings page so that we can create a completely virtual table. In this scenario, the valid table would be a view directly on the source table. This has the advantage of eliminating two data movements compared to a regular table. The drawback is that you still have the table in your DWH model. It is expected that functionality for such a table would be limited (no field transformations, validations, checkpoints, incremental load, or history). Currently, we can achieve something close to this by dragging a table from the business unit to the "Views" node in the DWH. However:
* this does not keep the mapping; if the source table changes, the view stops working.
* objects modeled as views kill data lineage. Since the proposed solution has a 1:1 field mapping, data lineage should be preserved.
Solution b. Allow business unit objects to be used in parameters (for scripts and views), so that the existing option to drag a business unit table to the DWH "Views" node is more robust. This has the advantage of eliminating data movements and decluttering the list of tables. Significant drawback: no data lineage, which is probably one of the key reasons for using a model-driven design tool like TX.
Solution c. Allow objects from business units to be used directly from other tiers in certain transformations, such as lookups and table inserts.
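The fully virtual table of solution a boils down to generating a 1:1 view over the business unit table while keeping an explicit field mapping. A minimal sketch of such a DDL generator (the function and all names are hypothetical, for illustration only):

```python
def build_passthrough_view(view_name: str, source_table: str,
                           field_mapping: dict[str, str]) -> str:
    """Generate the DDL for a fully virtual table: a 1:1 view on a
    business-unit table. Because the field mapping is explicit
    (source field -> DWH field), data lineage could in principle
    be preserved, unlike a hand-written view."""
    select_list = ",\n    ".join(
        f"[{src}] AS [{dst}]" for src, dst in field_mapping.items()
    )
    return (
        f"CREATE VIEW [{view_name}] AS\n"
        f"SELECT\n    {select_list}\n"
        f"FROM {source_table};"
    )
```

Regenerating this view after a source synchronization would address the "view stops working when the source changes" problem noted above.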
It would be nice to have a visual ERD tool for building a data model.
I'm using 19.6.1. It is possible to create a perspective in the ODX on tables, views, database schemas, and script actions. Would it be possible to add data sources to this as well?
When deploying an SSAS endpoint with the option 'Process model offline' enabled, the database is deployed to the online model. In my opinion, it would be logical for deploying the model to also create an offline version; when the model is executed, the offline model would then replace the online model.
Add an option to create the roles in the semantic model dynamically. When I have a table in DH (or Excel) that consists of two columns (RoleName and RoleMembers), it would be handy if the roles were generated from this table.
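The transformation from such a two-column table into role definitions is straightforward to sketch. The dictionary shape below loosely follows the TMSL roles object (`name`, `modelPermission`, `members` with `memberName`); the function itself is hypothetical:

```python
def roles_from_table(rows: list[tuple[str, str]]) -> list[dict]:
    """Group (RoleName, RoleMember) rows into a TMSL-style list of
    role definitions, one role per distinct RoleName, each with a
    read permission and its list of members."""
    roles: dict[str, dict] = {}
    for role_name, member in rows:
        role = roles.setdefault(role_name, {
            "name": role_name,
            "modelPermission": "read",
            "members": [],
        })
        role["members"].append({"memberName": member})
    return list(roles.values())
```

A deployment step could run this against the role table and apply the result to the tabular model, so role changes never require a redeploy by hand.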
When creating an SSAS Tabular endpoint from a Semantic Model and consuming it in Power BI, you will find that all numeric fields are given Sum() as the default summarization. While this makes sense in many cases, it doesn't in others; e.g. you may have a code that is stored as a number but should not be summed. In SSAS Tabular databases you can set the 'Summarize By' property on a field. It would be very useful to be able to set this property from Discovery Hub. I do not like the generally proposed solution of converting non-metric numeric fields to character types to avoid summarization, as it feels wrong to change a DWH layer for such a reason. See the following link for further context: https://www.sqlchick.com/entries/2017/11/22/why-the-default-summarization-property-in-power-bi-is-so-important
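For reference, the Tabular Object Model exposes this as the `summarizeBy` property on a column (values include `default`, `none`, `sum`, `min`, `max`, `count`, `average`, `distinctCount`). A sketch of what the generated column metadata could look like, with an illustrative column name:

```json
{
  "name": "ItemCode",
  "dataType": "int64",
  "sourceColumn": "ItemCode",
  "summarizeBy": "none"
}
```

Exposing just this one property per field in the Semantic Model would cover the use case without touching the DWH layer.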
During a workshop I noticed you can view raw data from the Hub. But if you want to analyse it one step further, you already have to create a custom query, open SQL Management Studio, etc. It would be very convenient to get some basic profiling results at the column level from within the tool, such as: min($column), max($column), count(distinct $column), the number of NULLs in $column, the number of blanks in $column, and the top 10 values of $column. Best regards, Hans Westerhof
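The statistics listed above can all be collected in a single pass over the column plus one grouping query. A minimal sketch against SQLite (a real implementation would run the equivalent T-SQL against the staging database; the function name is hypothetical):

```python
import sqlite3


def profile_column(conn: sqlite3.Connection, table: str, column: str) -> dict:
    """Return the basic profiling stats requested above for one column:
    min, max, distinct count, NULL count, blank count, and top-10 values."""
    mn, mx, distinct, nulls, blanks = conn.execute(
        f"""SELECT MIN({column}), MAX({column}),
                   COUNT(DISTINCT {column}),
                   SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END),
                   SUM(CASE WHEN {column} = '' THEN 1 ELSE 0 END)
            FROM {table}"""
    ).fetchone()
    top10 = conn.execute(
        f"""SELECT {column}, COUNT(*) AS n FROM {table}
            GROUP BY {column} ORDER BY n DESC LIMIT 10"""
    ).fetchall()
    return {"min": mn, "max": mx, "distinct": distinct,
            "nulls": nulls, "blanks": blanks, "top10": top10}
```

Surfacing exactly this dictionary in the preview pane would cover the workshop scenario without leaving the tool.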
Hi, we've got a lot of old projects using a 2-layer approach (staging and data warehouse). For many of those projects we would like to add an ODX Server underneath the staging layer. Doing this manually is a huge amount of work. It would be much easier (and better) if TimeXtender had a wizard to upgrade such a project. Best regards, Peter
Add a feature to create snippets for semantic models.
Add execution logs for semantic models. This way we can see the Execution Overview Log and easily check how long it takes to execute the model and the row count of the tables.
Is it possible to add a field DW_SourceTable to the default fields, so you can trace in the data where it is coming from? At a project I have database names like the ones below:
6401 - Company 1$Appointment
6401 - Company 2$Appointment
6409 - Company 1$Appointment
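Names in that pattern split cleanly into a company code, a company name, and a table name, which is the information a DW_SourceTable field would need to carry. A small sketch of such a parser (the function is hypothetical and assumes the `code - company$table` pattern shown above):

```python
def parse_source_table(db_object_name: str) -> tuple[str, str, str]:
    """Split a name like '6401 - Company 1$Appointment' into
    (company_code, company_name, table_name). Splits on the last '$'
    and the first ' - ', matching the pattern in the examples."""
    prefix, _, table = db_object_name.rpartition("$")
    code, _, company = prefix.partition(" - ")
    return code.strip(), company.strip(), table
```

Storing the full name (or these parts) per row would make it trivial to see which company's Appointment table a record came from.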
Would be super handy if, after synchronizing a source, all fields had checkboxes: new fields -> check to add them; deleted fields -> check to convert them to custom fields (useful for historical tables). Example UI attached.