When deploying and executing a Semantic model to an SSAS Tabular endpoint, TimeXtender does not check the syntax of the measures (DAX or MDX).
Could another option be created for deletions in history tables? Either an option to push all the data from the latest active record into the tombstone record, or a field [SCD Last Active Record] that acts as an indicator that this record was the last active record before the deletion. This way the last active record is easily selectable. In some of our sources we don't have a reliable enough date we can use to match with dimensions to find the correct surrogate key. We therefore want to display the latest active record of a dimension.
Especially important for semantic scripts where the "show translation" option was removed in a recent version, but also very useful for regular scripts.
The AX adapter (and probably other data adapters in TX) has the amazing feature of right-clicking a table to see all the field information and relations to other tables. This is an amazing resource and tremendously helps in correctly building the ETL for such a data source. However, as long as the window is open, I am unable to do anything else in TX. It would be really sweet if the pop-up functioned the same as when you Ctrl+W a table to a new window, so I can keep the information window open while I edit the tables.
If you change a stored procedure, Discovery Hub knows a deploy is required (the orb in the bottom left turns red); however, there is no visual cue on the stored procedure itself. It would be good if stored procedures were coloured red and given a *, just like other objects in Discovery Hub.
We have so many saves that it's impossible to find the right one. It would be handy if we could "tag" or "label" revisions:
* Noteworthy milestones
* Releases
* Etc.
Additionally, a future cleanup functionality could optionally keep revisions that have tags.
Hi, when creating/editing an object (view, stored procedure, script, …) where you can use the parameter panel, it would be a big help if the parameters in use (tables, columns, ...) were highlighted in the panel. It would be even better if the objects in the panel could be sorted and filtered (the same way as we can for tables/columns in the data source panel). BR, Peter
It would be very useful to be able to import existing tabular model definitions (as .bim files) into the semantic layer, so that features which are not yet supported could still be developed (in other tools) and then integrated and deployed.
It would be great if you could guard deployment on all objects (not only tables), e.g. views, stored procedures, ...
It would be nice, for certain data sources that you need to connect to multiple times, if you could duplicate a source or use an existing source as a template. This would save time and frustration when all you need to change is a JSON path or an XPath, or just one small part of the URI to get the appropriate table.
It would be nice to be able to individually execute tables in a semantic model, i.e. process a single table in the SSAS Tabular context. I have a small fact table that should be frequently reloaded, coupled with a large fact table that is reloaded nightly.
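Outside of TimeXtender, a single table in a deployed Tabular model can already be processed with a TMSL refresh command (e.g. via SSMS or Invoke-ASCmd); the idea is for TimeXtender to expose the equivalent per-table action. A rough sketch, with illustrative database and table names:

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      {
        "database": "SalesModel",
        "table": "SmallFact"
      }
    ]
  }
}
```

Running this against the Tabular instance reprocesses only SmallFact, leaving the large nightly fact table untouched.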
Currently, table queries do not respect data type overrides specified on the source. In a table query you can only convert/cast to types known to the source system, whereas the data type override lets you cast/convert to Microsoft SQL Server types. Please make it so that data type overrides are also applied to the result of table queries.
Add an indicator to fields that shows how many source fields are mapped. E.g. I have a Creditor table that maps creditors from 8 sources; every field in the table needs a mapping to all sources. I use Smart Synchronize to do that, but sometimes when I have a typo in a field name, the field is not mapped. Now I need to open all mappings on the field to check whether everything is mapped, or check the Data Movement window. It would be much easier if I could see the number of mappings on each field, e.g. CreditorSK (bigint) (8) when 8 fields are mapped.
Just installed the newest release, 18.10.1, and noticed the Qlik tab is gone. I heard that all functionality of the Qlik tab would be moved to the Semantic tab, but I am still missing a feature for Qlik endpoints in the semantic model: it is not possible to add pre- or post-scripts to a Qlik model in the semantic layer. Please move this function to the Semantic tab as well.
I regularly have cases where I want my Qlik model to include only a subset of my data. Let's say I have a table dim_product containing all the product dimension information, and I want to make an app specifically about products of a certain product group A. In TimeXtender I can use the filter option to only load the products where productgroup = A. As the next step, I want to load all sales transactions for that product group. In Qlik Sense script I would write something like this: LOAD * FROM Transactions WHERE EXISTS(ProductID); and the transaction table would then contain only the rows matching the product IDs where the product group equals A. Sadly, TimeXtender lacks this feature, forcing me either to bring the product group into the fact table as well or to write custom Qlik script where I first load the entire table and then do a resident load.
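In Qlik script, the desired behavior and the current workaround could look roughly like this (library, table, and field names are illustrative):

```qlik
// Dimension as generated by TimeXtender, pre-filtered to product group A.
dim_product:
LOAD ProductID, ProductName, ProductGroup
FROM [lib://DWH/dim_product.qvd] (qvd);

// Desired: TimeXtender generates a transaction load restricted to the
// ProductID values already present in the filtered dimension.
Transactions:
LOAD *
FROM [lib://DWH/Transactions.qvd] (qvd)
WHERE EXISTS(ProductID);

// Current workaround: load everything, then reduce via a resident load.
// Transactions_All:
// LOAD * FROM [lib://DWH/Transactions.qvd] (qvd);
//
// Transactions:
// NoConcatenate LOAD * RESIDENT Transactions_All WHERE EXISTS(ProductID);
// DROP TABLE Transactions_All;
```

The WHERE EXISTS form also keeps the QVD load optimized in Qlik, whereas the resident-load workaround first pulls the full transaction table into memory.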