Currently, if you create a relation between two tables and then do a lookup between those tables, the default option is to not use an explicit join. However, explicit joins are really useful for understanding a lookup at a glance. If explicit joins are not going to be the default behavior, it would be really helpful to be able to specify a default behavior for new lookup creation at the project level. It's only one click to choose an explicit join, but over hundreds of lookups, it becomes a serious time waster.
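For anyone unfamiliar with the difference, here is an illustrative contrast (table and column names are made up, and this is not necessarily the exact SQL TimeXtender generates):

    -- Lookup without an explicit join (harder to scan):
    SELECT o.OrderID,
           (SELECT TOP 1 c.CustomerName
            FROM dbo.Customer AS c
            WHERE c.CustomerID = o.CustomerID) AS CustomerName
    FROM dbo.[Order] AS o;

    -- The same lookup as an explicit join (readable at a glance):
    SELECT o.OrderID, c.CustomerName
    FROM dbo.[Order] AS o
    LEFT JOIN dbo.Customer AS c ON c.CustomerID = o.CustomerID;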
On table insert it would be nice to be able to fill out the system field 'DW_SourceCode' just like any other field on the table you are filling with data. Regards, Jens Jørn
Currently, junk dimension tables are automatically keyed with varbinary(64) fields. This takes up a lot of room in a fact table and can't be used as a join key in Tabular. Could junk dimensions share the functionality of the supernatural key stores by storing the hash locally and using an integer as the key instead?
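A minimal sketch of what such a key store could look like (names and types are purely illustrative, not TimeXtender's actual implementation):

    -- The hash stays in the key store; fact tables carry only the compact int key.
    CREATE TABLE dbo.JunkDimension_KeyStore (
        JunkKey   int IDENTITY(1,1) PRIMARY KEY,  -- integer surrogate key used in facts
        HashValue varbinary(64) NOT NULL UNIQUE   -- hash kept locally for key lookups only
    );

A 4-byte int in the fact table instead of a 64-byte varbinary is a significant saving on a table with hundreds of millions of rows, and it joins cleanly in Tabular.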
Some features have been added recently to automatically add the joins necessary for handling SCD Type 2 lookups from a history table. I'd like to suggest another enhancement which would save a ton of time and manual clicking when we're adding surrogate keys to fact tables.

Dragging and dropping DW_ID from a history table to a fact table is easy. But the process of adding the fixed transformation for a -1 "unknown bucket" requires a lot of clicks, is error prone, and is something I despise doing.

I'd like to suggest the following enhancement to the new dialog that appears when dragging a field from a history-enabled table (note that this option should only be available when dragging DW_ID): a simple checkbox to add the surrogate key "unknown bucket" fixed transformation and also add the Is Empty condition. There have been several times where developers have clicked on the wrong field for the condition, or forgotten to add the condition altogether, so this would be a great help. A SQL sketch of the pattern follows below.

Also, try adding the transformation and the condition 25 times for 10 fact tables. Your mousing hand will quickly agree with my suggestion. I promise!
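For context, here is roughly the result all those clicks produce, expressed as plain SQL (table and column names are invented for illustration):

    -- When the SCD Type 2 lookup finds no matching history row, fall back to -1,
    -- the "unknown bucket" member of the dimension.
    SELECT f.OrderID,
           COALESCE(h.DW_ID, -1) AS Customer_SK  -- fixed transformation: -1 when the lookup is empty
    FROM dbo.FactOrders AS f
    LEFT JOIN dbo.Customer_History AS h
        ON  h.CustomerID = f.CustomerID
        AND f.OrderDate >= h.SCDFromDate
        AND f.OrderDate <  h.SCDToDate;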
[This subject has been brought up a couple of times, but the original post is from 2017. I'm reposting my recent comment here because I feel like nobody will see the old 2017 post, since this site only shows the most recent posts by default. Original post: https://support.timextender.com/hc/en-us/community/posts/115018657763-Ability-to-include-powershell-script-actions-in-TX]

We and several of our clients would benefit greatly from the ability to execute a PowerShell script from within TimeXtender as a Script Action. Here are a couple of scenarios:

1. Scaling Azure SQL DB instances with SQL is problematic. I can use a SQL statement to scale a DB up prior to the load, but we are unable to create a loop and wait for the DB to finish scaling. The problem is that we can wait, but the session disconnects when the larger DB is ready and Azure SQL switches the connection, so the SQL stored procedure is never able to determine whether the database scaled successfully. A PowerShell script could be called to scale the DB instead, and since its session is not disconnected during the resizing process, it would allow for a scale, wait, confirm, and continue process (see the sketch below). This could be implemented as an external executable for execution packages, but that would require multiple execution packages to properly sequence the scale up, load, scale down process. If PowerShell were available as a type of Script Action which could be called at the table level, it would also satisfy a requirement of one of our clients. (See #2.)

2. My client has written C# processes to extract data from another system. Ideally, these extracts need to be part of the Discovery Hub load so the data is refreshed in a timely manner without trying to coordinate many different schedules. Their C# process to pump data into custom tables may be rather unique within the user community, except for a mention here: https://support.timextender.com/hc/en-us/community/posts/360033987671-Possibilty-to-execute-scripts-like-python-powelshell-as-a-custom-script-execution?input_string=Powershell%20Script%20Actions However, this is something else PowerShell could help with, instead of creating many clumsy SSIS packages and external executables. Custom SSIS packages really are not an option for them since there would be so many to create. SSIS *could* be an option IF the external executable SSIS package calls allowed parameters to be passed in; then a more generic SSIS package could be created that executes a process whose name is passed in. This, however, doesn't help in an Azure scenario where SSIS isn't available.
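To make scenario 1 concrete, this is the kind of T-SQL statement involved (the database name and service objective are illustrative). It kicks off the scale asynchronously, and the session is dropped when Azure SQL switches over to the resized database, which is exactly why a stored procedure cannot wait and confirm completion; a PowerShell cmdlet such as Set-AzSqlDatabase does not have that limitation.

    -- Starts an asynchronous scale operation on an Azure SQL database.
    -- The connection is terminated when the resized replica takes over,
    -- so any wait/confirm logic in the same session never completes.
    ALTER DATABASE [MyDwhDb] MODIFY (SERVICE_OBJECTIVE = 'S4');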
I had a request come in from a customer today which seems like a good idea. They have to maintain strict SOX controls, so knowing what version was deployed, by whom, and when are important pieces of information their auditors need. Currently, when a project is transferred between environments, there isn't any notation in the project version notes. It would be very useful to simply add the information displayed after a successful transfer to the target environment's new version created by the transfer. So, in this case version 69 was created in PROD, and it would be very nice if the transfer details shown in that confirmation were entered into version 69's notes. Otherwise my SOX-compliant customer will need to maintain manual logs of transfers.
Enabling "Show System Control Fields" in the project settings does not make system control fields visible for existing tables in the project. The tooltip does indicate that they "will automatically be added to new tables...". However, I think it would be useful to add an option to show system control fields retroactively, so you don't have to enable them for each existing table object manually. Thanks
Please add the following new features:

1. When selecting multiple text files as a data source, please add the option to import only the latest file into the warehouse rather than all files, based on the timestamps of the individual files. For example, if I have Aging20180401.csv and Aging20180402.csv in the same directory, I would like to be able to import only the "02" version, as it is the newest one in the directory.

2. Please allow for multiple files to be in a directory, with the option to load those files into the warehouse in chronological order. For example, if I have Aging20180401.csv and Aging20180402.csv in the same directory, I would like to be able to import both files but make sure they come into the warehouse in a selectable order (oldest first, newest first, alphabetically, etc.).

Thanks.
If there are rows in the Warning and Error tabs after execution, there really needs to be a notification. I've personally run into situations where I forgot to check these tabs and then struggled for quite a while trying to figure out why data wasn't being loaded into my table. Several of my clients have asked about notifications as well. This should be implemented for ad-hoc executions in the GUI as well as scheduled or queued executions through an execution package.

1. For ad-hoc / manual execution of objects in the GUI, it would be wonderful if the number of error and warning rows could be displayed in the execution dialog window. Or make the tabs show a different color and display the count of error or warning rows.

2. For scheduled or queued execution, email notifications need options to include error and warning counts, possibly grouped by table name. I would want a separate email sent out so I explicitly know whether errors occurred during the last load, without having to open up each success email.

Obviously, after using TimeXtender for a while one gets used to checking the Error and Warning tabs regularly, but this is not intuitive for new users. Thanks!!
TX is awesome for removing a lot of the tedious work involved in traditional ETL, but there are still some things it can improve upon. For instance, multi-select of fields in various places would speed up development a great deal. The following are a couple of enhancements I'd like to see:

1.) If I add a bunch of surrogate key lookups to a table, they all end up at the bottom. I prefer all the keys at the top of the table, so I then have to drag each one up individually. Multi-select for re-ordering would be incredibly useful.

2.) Multi-select tables added to the Business Unit or Data Warehouse and drag them to reorder all at once. This would make things much easier in a project that has many tables!

3.) Multi-select fields in table #1 and drag them into table #2 to create multiple lookups at once. This one isn't as big a deal, but it would help save clicks.

I think there are probably more places where multi-select makes a lot of sense. I'll add more to this post as I come across them.
Hi, it would be great if security were redeployed after a table is deployed. At the moment we may secure a table, but when we deploy the table the security is dropped because the table is recreated, and we then have to remember to redeploy security; otherwise users are not able to view the data (see the sketch below). Thanks
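For anyone wondering why the security disappears: object-level permissions in SQL Server live on the table object itself, so a drop-and-recreate deployment silently removes them. A hypothetical example (dbo.Customer and ReportingRole are made-up names):

    -- This grant is attached to the dbo.Customer object; if deployment drops and
    -- recreates the table, the grant is gone and must be issued again.
    GRANT SELECT ON dbo.Customer TO ReportingRole;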
In many transformations you need a reference to the "current field". For example, a transformation on [Field A] could be:

    coalesce(nullif([Field A], N''), N'Unknown')

In this case, it would be a huge improvement if we could write something like the following instead of having to assign a parameter:

    coalesce(nullif([CURRENT_FIELD], N''), N'Unknown')

Especially when working with SQL snippets this would greatly improve usability, as it allows drop-in snippets that don't need parameterization. It should be noted that in the case of a self-reference, having an actual parameter has little benefit anyway, as a self-reference has no lineage impact.
Again requesting the option to sort display folders in this window instead of showing them in order of creation. Can you please make it a reflex to add this functionality by default? The number of feature requests for alphabetically sorted lists in this forum is almost beyond counting. Also implement this at other levels as well; it is so tiring scrolling up and down lists just to find the thing you need.
It would be nice to be able to set up the data export as a global database, since we are using both test data and production data. Every time we transfer from the test environment to the production environment we have to manually change the paths of the data exports to our production file server name.
When creating a tabular model, I might use the same DWH view twice. E.g. when creating a supply chain model, there should be a 'shipping location' and a 'receiving location', with both dimensions built on the same DWH view 'locations'. Once one of the two dimensions is built in a tabular model, it would save time if one could just copy/paste the dimension and change the name (besides the name, both dimensions are identical).
We prefer to add a prefix to every field of a table, because when pushing data to Qlik the field names must be unique. So on every field in a table we add a (usually logical) prefix. With a table that has a lot of columns, changing this for every field can be a lot of work. An option to prefix every field of the table at once would be very useful in this case.
In this case we have a package that runs once a day and takes 1 hour to reload. Besides this, there is a package which runs every 5 minutes the whole day long. Currently, an error occurs when the once-a-day reload is running and the every-5-minutes reload tries to start. After the daily reload is done, the 5-minute reload runs successfully again. At the moment there is a workaround of creating "ghost" projects, but in my opinion this is not desirable.
After creating a new execution package, a full deployment is needed. This can take a lot of time. I understand a deployment is necessary for the changed objects included in the execution package, so I would prefer a deployment of only the objects that have changed and are included in the newly created execution package.
At the moment it is possible to create two endpoints in one semantic layer, which will create two apps in Qlik Sense. Sadly, the names of the apps cannot be the same. I would like to publish app one to stream one and app two to stream two, with both apps having the same name.
It would be nice if you could create a "dynamic" work item in which all tables/views you add or amend are automatically included as long as you have it selected/active, much the same way dynamic perspectives work right now. At the moment you have to remember to create work items afterwards, or add multiple work items with the same description one by one, which can quite easily be overlooked.
Currently the fields added by the history functionality to indicate status are of type bigint. While this may have little impact on relatively small dimensions, it adds up for larger tables. Using bit fields would fit the boolean nature of these fields and cost as little as one byte per record, whereas using a bigint for both fields takes 16 bytes per record. See the sketch below.
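A worked illustration of the storage difference (table and column names are invented; the actual TimeXtender field names differ):

    -- Two bigint status flags: 2 x 8 bytes = 16 bytes per row.
    -- As bit columns, SQL Server packs up to 8 bit fields into a single byte,
    -- so both flags together cost 1 byte per row.
    CREATE TABLE dbo.Customer_History_Example (
        DW_ID       int NOT NULL,
        IsCurrent   bit NOT NULL,  -- instead of bigint
        IsTombstone bit NOT NULL   -- shares the same storage byte as IsCurrent
    );

On a 100-million-row table, that is roughly 1.5 GB saved before compression.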
You can create master items in a Qlik Sense endpoint by adding custom measures to the semantic model. It would be nice to also be able to add a description to the master item, so that our front-end users can be guided on how to use it. My suggestion would be to carry the TX description over to the master item's description. In the same vein, adding tags in TX could translate to labels on the QS master item.
When exporting data using the CSV export, we are currently forced to export the CSV under a single name. However, sometimes you don't want to export the full table, but just a new increment (for example, an hourly export of that hour's new records) so the target system can easily load increments of data instead of the full set. We can create that behaviour with a custom selection rule, but the file name will be the same for each export. The suggested feature would allow you to define a custom naming pattern so that, for example, you could enumerate the exports or incorporate the date and time of the export into the file name.