Hi there,

First of all, I'm getting rave reviews from our client about the new 19.6 interface! Well done! They had a couple of thoughts on the UI, which I'm turning into requests on their behalf.

The first request is for a "twilight" mode. Right now there's a dark mode and a light mode, but they find the light mode brighter than they'd like, and the dark mode darker than they'd prefer. Also, the window backgrounds are still pure white, even in dark mode! This makes it not actually all that dark. They've suggested a color palette similar to the default palette in Power BI's modelling mode - cool grays, kinda like this:

I think that would be a nice touch, and much easier on the eyes than the current options, especially if you're sitting in front of Discovery Hub all day long.
Configuring global database settings for a text file data source is difficult. It takes two clicks to get into a text field, and three clicks to select something from a dropdown menu. There are a lot of settings in this data source, and configuring even one across multiple environments is a pain; configuring several can get pretty agonizing. Being able to click into a field directly instead of selecting it first, or to open a dropdown menu with a single click, would really improve the user experience.
We all know that Discovery Hub is extremely reliable at moving rows from one database to another. However, proving that to other folks can be a challenge. Also, scripts, validation rules, incremental loading, history rules, and primary keys can sometimes cause a destination and a source table to come "out of sync," which is sometimes intended, sometimes not. In either case, this desync makes it harder to validate that the load is working as intended. Adding an extra count option to the execution package - a count of the rows in the source table - would be a small thing to add, but an extremely powerful validation tool. This capability would make all kinds of tasks, from basic development to upgrading the software, much easier to validate. Ideally, this count would take data selection rules and data source settings into account, so we'd be able to quickly compare what we expected to get vs. what we actually got. Thanks!
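To make the idea concrete, here is a minimal Python sketch of the comparison this source-row count would enable, using an in-memory SQLite database and hypothetical source/destination tables (Discovery Hub exposes no such API; the table names and the selection rule are made up for illustration):

```python
import sqlite3

# Hypothetical source and destination tables standing in for a real load.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE source_orders (id INTEGER, region TEXT);
    CREATE TABLE dest_orders   (id INTEGER, region TEXT);
    INSERT INTO source_orders VALUES (1, 'EU'), (2, 'EU'), (3, 'US');
    INSERT INTO dest_orders   VALUES (1, 'EU'), (2, 'EU');
""")

# The data selection rule (here: only EU rows are loaded) must be applied
# to the source count, so expected and actual numbers are comparable.
selection_rule = "region = 'EU'"
expected = conn.execute(
    f"SELECT COUNT(*) FROM source_orders WHERE {selection_rule}"
).fetchone()[0]
actual = conn.execute("SELECT COUNT(*) FROM dest_orders").fetchone()[0]

print(expected, actual, expected == actual)  # 2 2 True
```

The point of applying the selection rule on the source side is exactly the "what we expected vs. what we got" check described above: without it, the raw source count (3 here) would always look out of sync with a filtered load.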
We need scripting in the tabular model in the same fashion as is possible in the OLAP model, meaning that we can add scripts at different levels, as well as pre- and post-scripts on the model itself.
We are in the position where we are working on major changes to a given project. The changes are expected to take some time to complete. Meanwhile, smaller MOB projects force us to make changes. We need a solid methodology for deploying ONLY the portion of the project necessary to deploy the MOB projects. We are a SOX company, and separation of duties is a large issue here; we do NOT want to manually make the changes directly in production. We also do not want to deploy untested objects into our production environment by deploying the entire project. This is a serious issue for us, and we need a solution if we are going to continue to make use of TimeXtender as part of our BI solution. Thanks!
Currently we need to manually enter DAX for any calculations apart from the standard sums, etc. It would be great to have the ability to add snippets for Tabular so we can develop a library.
Microsoft implemented security differently in SSAS Tabular compared with Multidimensional. With Multidimensional we had the database concept with multiple cubes... but a security role could be defined once and used across all of the cubes in the database. Tabular is a bit frustrating in that we have to create a role and define role membership for each model. But this could be TimeXtender's time to shine!! :) I'd like to propose the ability to create a shared security role for the Semantic Models. A role could be defined with membership applied, and that role could then be linked to the models, where security can be further defined. Essentially, the shared role would provide a centralized membership list rather than having to recreate it for each model. We typically deploy 7 models, and it can be a bit of a hassle to maintain membership lists in 3 environments for each role in each model. Yes, we can do this with AD Groups... most of the time. That capability doesn't really work with Azure Analysis Services, so it becomes very painful there.
Right now, the default setting for transformations is "Custom." The overwhelming majority of transformations we do are "Fixed," and it would be nice to be able to set this as the default type when adding a transformation. In addition, when you add transformations to one field and then select another, the transformation type snaps back to Custom no matter what. This adds several extra clicks if you are adding many transformations that aren't custom. If the option for a default transformation type can't be added, then at least maintaining the current transformation type when changing fields would be really useful. The use case here is that we are adding fixed values to handle cases where DSK or lookup surrogate foreign keys are empty. When repeating this process for dozens of fields across multiple tables, selecting and re-selecting the transformation type begins to feel very laborious.
It's not currently possible to see what code TimeXtender is committing to the various front-end endpoints. The ability to see this code is extremely helpful when it comes to actually writing code in the program itself. Without it, you can either parameterize your code or make it easy to understand; you can't really do both. The lack of a "Show Translation" feature makes very useful features like fully qualified fields a real mixed bag. It can be hard to tell what you're doing in more complicated measures, because the code you see in a properly parameterized Script window is so different from what is actually sent to the front-end system. In order to create something understandable, the user may be forced to use less efficient, more verbose options, like adding the table name and the field name individually. A user might also simply decide to skip the Tabular front end altogether and design the tabular model in Visual Studio, just to be able to see what they're actually doing. Clearly, this is an outcome we'd like to avoid!
Hi! When you have a lot of cubes and dimensions, it can be a big task to minimize the menu to get to what you want. See video: https://ilos.video/phk2zW I would like it to be the complete opposite, so the tree is minimized to cubes and dimensions at startup instead. Like this.
The new multi-pane layout of Discovery Hub is great, but it has a bit of a problem: the windows aren't context sensitive in some cases, so what happens in one window happens in all windows. An example of this is creating relations between tables. Doing this with the same database open in two panes makes the process much easier and faster! Except that creating a relation causes the UI to snap to the relations node of the table where the relation has been created. It does this for all panes and windows that contain that table, even though you'd want it to happen only in the "destination" pane and nowhere else. Honestly, as a more advanced user I find the "snapping" functionality frustrating. But if it has to happen, making it happen in all windows and panes makes the new layout much less useful for tasks like creating relations. Another example is opening tables in new windows. If you have multiple panes open and open tables in a new window, closing any of the panes closes all of the open windows. It would be very helpful to close only the windows that were opened from the pane that's being closed.
It’s very common for a data source to have almost identical settings across environments, except for one or two fields. This makes setting up global databases frustrating, because all connection information needs to be re-entered each time. Being able to copy settings between environments in a global database setup would be extremely helpful and make the window feel far more automated.
We currently have the ability to create role-based security on individual fields within Semantic Model tables, but there is no ability to restrict access to an entire table in the model.
Having the ability to execute script actions for a Semantic Model (specifically SSAS) would give developers the ability to use additional features of SSAS which aren't currently exposed in Discovery Hub. This could help with implementing dynamic security roles, among other things. I equate this with the ability to execute XMLA scripts for cubes, which allows us to change SSAS properties not exposed in the UI.
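For a sense of what such a script action might execute, here is a sketch of a TMSL `createOrReplace` command defining a dynamic security role. The database name "SalesModel", the "Sales" table, and the "UserRegion" mapping table are all hypothetical; only the overall TMSL role shape is standard for SSAS Tabular:

```json
{
  "createOrReplace": {
    "object": { "database": "SalesModel", "role": "DynamicReader" },
    "role": {
      "name": "DynamicReader",
      "modelPermission": "read",
      "tablePermissions": [
        {
          "name": "Sales",
          "filterExpression": "Sales[Region] = LOOKUPVALUE(UserRegion[Region], UserRegion[User], USERPRINCIPALNAME())"
        }
      ]
    }
  }
}
```

Running scripts like this from Discovery Hub would cover the dynamic-security scenario described above without round-tripping through SQL Server Management Studio.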
As you can see in the image below, I have SQL snippets. It would be nice if the project variables were also available there.
The securable views are pretty useful, but suffer from a couple of important limitations. The first is that each view contains every record from its source table. This isn't always desirable: it may be ideal to create a view of a fact table that contains only certain kinds of transactions, or a view of a Type II SCD dimension with only the current values. Right now, the only way to accomplish this is to create physical versions of these tables, which is a clunky solution at best and unviable at worst. The other problem is that you can't control which columns end up in these views. The most obvious issue is that all of the system fields show up in the securable view, which is undesirable because these views are almost always exposed directly to the user. It also reveals the existence of all the columns in the table to everyone who has access to the view, even if access to those columns is restricted by a security policy. If these two features were added, securable views would be considerably more flexible and powerful tools!
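To illustrate what a filtered, column-limited securable view could emit, here is a small Python/SQLite sketch with a hypothetical Type II SCD customer table. The column list and the "current rows only" filter are exactly the two requested capabilities; none of the names come from Discovery Hub itself:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_customer (
        dw_id INTEGER,          -- system field we'd like to exclude
        customer_id INTEGER,
        name TEXT,
        is_current INTEGER      -- Type II SCD current-record flag
    );
    INSERT INTO dim_customer VALUES
        (1, 100, 'Acme (old)', 0),
        (2, 100, 'Acme',       1),
        (3, 200, 'Globex',     1);

    -- Only the chosen columns, only current rows: no dw_id exposed,
    -- no historical versions visible to the end user.
    CREATE VIEW v_customer AS
        SELECT customer_id, name
        FROM dim_customer
        WHERE is_current = 1;
""")

rows = conn.execute("SELECT * FROM v_customer ORDER BY customer_id").fetchall()
print(rows)  # [(100, 'Acme'), (200, 'Globex')]
```

This is the shape of view the post is asking the tool to generate, instead of forcing a physical copy of the filtered table.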
Right now, you can’t tell whether something has a work item just by looking at it. Changing the appearance of an object that has a work item on it would be extremely helpful in avoiding accidental collisions.
Work Items are missing from the Semantic Modelling tab. Work Items help make sure multiple developers do not step on each other's changes, and that functionality is currently missing there.
Currently, we have 3 parameters which can be used in the Notification email subject. Execution packages which are scheduled normally have the environment name in them, but sometimes we need to manually run a more generic execution package in an environment... for instance, to force a full load. In that scenario, it would be helpful to be able to use the %ENVIRONMENT% project variable in the notification. Right now, our only alternative is to set up multiple copies of execution packages.
When we are deploying Discovery Hub over and over again, one of the most painful aspects is setting up the Environment Properties. I think there are a couple of things which could greatly improve that experience:

1. The ability to clone settings from one environment to the next for a Global Database. If I get the Global Database set up for Dev, it would be much faster to clone those Dev settings to QA and then tweak only the info which needs to change! Right now I usually copy and paste one field at a time.

2. The ability to clone a Global Database to a new one... this would be particularly useful when there are multiple SSAS Tabular Models. Right now I need to manually create a Global Database for each model (7 of them) and manually set all of the settings for those in each environment (21 settings boxes, ahhh!).

Thank you, David
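The clone-then-tweak workflow in point 1 can be sketched in a few lines of Python. All of the setting names here are hypothetical; Discovery Hub exposes no such API today, and this only illustrates the copy-and-override idea:

```python
# Hypothetical Dev settings for one Global Database.
dev = {
    "server": "dev-sql01",
    "database": "DWH",
    "auth": "integrated",
    "command_timeout": 300,
}

def clone_settings(source: dict, **overrides) -> dict:
    """Copy every setting, then apply only the ones that differ per environment."""
    settings = dict(source)   # shallow copy; source stays untouched
    settings.update(overrides)
    return settings

# Cloning Dev to QA: only the server name changes, everything else carries over.
qa = clone_settings(dev, server="qa-sql01")
print(qa["server"], qa["database"])  # qa-sql01 DWH
```

With 7 models and 3 environments, this reduces the work from 21 full settings forms to one form plus a handful of overrides per environment.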
Most of the time, a field in a view has the same name as the field it pulls from in the source table. Adding an automatic mapping feature based on name would make Map Custom View Fields a lot easier and friendlier to use.
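The matching rule being requested is simple enough to sketch in Python. The field names are invented, and a real implementation would presumably live inside the Map Custom View Fields dialog rather than in user code:

```python
def auto_map(view_fields: list, source_fields: list):
    """Pair view fields with same-named source fields; leave the rest for manual mapping."""
    available = set(source_fields)
    mapped = {f: f for f in view_fields if f in available}
    unmapped = [f for f in view_fields if f not in available]
    return mapped, unmapped

# Hypothetical example: two names match, one ("Turnover" vs "Revenue") does not.
view_fields = ["CustomerID", "Name", "Turnover"]
source_fields = ["CustomerID", "Name", "Revenue"]

mapped, unmapped = auto_map(view_fields, source_fields)
print(mapped)    # {'CustomerID': 'CustomerID', 'Name': 'Name'}
print(unmapped)  # ['Turnover']
```

Everything name-matched is mapped automatically, and only the leftovers (here, `Turnover`) would need the current manual workflow.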