There are different windows where I would like to have a 'Select all' checkbox, for example in the History settings. When I have a big table, I have to tick each field one by one, which takes a lot of time.
Many requests are already in place to sort various lists (which are not affected by sorting the perspective). When adding a table insert, the list is unsorted; it is not even the order in which the objects were created, it's just random, and when the list is long you have to scroll visually to find the object you need. When selecting a column in the table insert, the list is not sorted either. And although you can type the first letter, that only works for the first items starting with that letter; items further down the list starting with the same letter are not found. There are more, I'll try to add them later on.
When creating a custom table insert and mapping tables to the parameters, the Raw table is mapped by default. Normally I always use the Valid table. Please map the Valid table by default instead of the Raw table, or maybe create a project setting which allows you to set the default mapping (Valid or Raw).
It would be great to have some sort of measure library for the semantic models. At the moment the measures for a semantic model are part of that semantic model, and it is not possible to copy a measure to another semantic model. It would be great to have a central place to add all measures for the semantic models, and to simply point from a semantic model to the library (or to individual measures) to include them in the model. This way you can change a measure in the library and it will be changed in all models where the measure is used.
Add functionality to copy / clone a measure in the semantic model, including mapping of parameters of course.
Add a project-level option to send all executions to the execution queue by default.
I'm really excited about the Syntax Highlighting and Autocompletion option for transformations, custom table inserts and data selection rules in the newest version of TimeXtender. But unfortunately, this is not working in the Query Tool. Please add this functionality in the Query Tool!
Provide an option to set display folders at Dimension or Fact level. Each time someone adds a hierarchy, measure, or attribute, we have to tick the correct display folders. In most cases a display folder is used for an entire fact or dimension, not for a single attribute.
Add option to create work items on Semantic Models, Semantic Tables and Semantic Model Measures.
Recently I found out that the TimeXtender implementation of the Qlik endpoint bypasses Qlik's load balancing. In the ideal situation the following workflow would happen: 1) in the TX endpoint we add the hostname of the Qlik central node; 2) when the TX endpoint is executed, TX calls the central node; 3) the central node distributes the task to one of the available nodes for execution. Currently, however, the hostname that is called is always the node that executes. This is even the case when the central node is set to 'Master only' and therefore should not be performing any reload tasks whatsoever. I suspect this happens because TimeXtender calls the app:Reload endpoint in Qlik. Perhaps it would be possible to have the TX deploy create a task in Qlik that is then called on execute using the task:Start API endpoint. In any case, TimeXtender should work with Qlik in such a way that it allows Qlik's load balancing to do its job, in order to facilitate bigger, enterprise-size architectures.
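To illustrate the difference, here is a minimal sketch of the task-based approach the post suggests: asking the central node's repository/scheduler to start a task, so the scheduler (not the caller) picks the execution node. The port, endpoint path, and header conventions are assumptions loosely based on the Qlik Sense QRS API, not actual TX behaviour; the task id is a hypothetical placeholder.

```python
# Sketch only: build (but do not send) a request that asks the central
# node's scheduler to start a reload task, instead of reloading the app
# directly on whichever host was called.
import urllib.request

def build_task_start_request(central_node: str, task_id: str,
                             xrfkey: str = "0123456789abcdef"):
    """Return a POST request for the (assumed) QRS task-start endpoint."""
    url = f"https://{central_node}:4242/qrs/task/{task_id}/start?xrfkey={xrfkey}"
    req = urllib.request.Request(url, method="POST")
    # QRS expects the anti-CSRF key in both the URL and a header.
    req.add_header("x-qlik-xrfkey", xrfkey)
    return req

req = build_task_start_request("qlik-central.example.com", "my-reload-task-id")
print(req.full_url)
```

Because the request targets a task rather than an app on a specific node, the scheduler is free to distribute the reload, which is exactly what the direct app:Reload call prevents.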
Already asked by Peter Jensen with 9 upvotes in 2017, with no comments from any TX employees so far. So even today I am forced to modify the cleansing procedure and add my own logging. It would be nice to have an option per table to enable extended data cleansing logging. This would log the start and end date of each step in the procedure to a dedicated log table. (I have to do this when a proc starts throwing timeouts or suddenly takes a long time.) I think this takes about 15 minutes to implement in your code base: at the start of the proc you check whether the logging table exists, else you create it; before each step in the proc you store the datetime in a variable; at the end of each step you write a record to the log table and reset the start time again, as such:

    IF OBJECT_ID(N'etl.HAWEmployeesPlanningMT_DataCleansingLog', N'U') IS NULL AND @enableExtendedLogging = 1
    BEGIN
        CREATE TABLE etl.HAWEmployeesPlanningMT_DataCleansingLog (
            Id BIGINT IDENTITY(1,1) NOT NULL,
            Version BIGINT NOT NULL,
            Step NVARCHAR(1000) NOT NULL,
            DateTimeStart DATETIME,
            DateTimeStop DATETIME,
            CONSTRAINT [PK_bb573581_2114_4b3e_b1bd_042488deac10] PRIMARY KEY CLUSTERED (Id ASC)
        )
    END

    SET @DateTimeStart = GETDATE()
    SET @step = 'Keep field values up-to-date: detect lookup value changes'

    -- Keep field values up-to-date: detect lookup value changes
    /* TX generated SQL */

    IF @enableExtendedLogging = 1
        INSERT INTO etl.HAWEmployeesPlanningMT_DataCleansingLog (Version, Step, DateTimeStart, DateTimeStop)
        VALUES (@version, @step, @DateTimeStart, GETDATE())

    SET @DateTimeStart = GETDATE()
    SET @step = 'Update conditional lookup fields (Many lookups, Take the first value): ''CustomerHQId'', ''CustomerCode1'', ''IsTardyCancellation'', ''IsExaminationsFinished'', ''DateExamination'', ''ExaminationPart1ResearchCode'', ''MedicalExaminationCode'', ''ExaminationTypeCode'', ''MedicalExamination_IsPeriodical'', ''CategoryCode'', ''SubmissionCategory'', ''DateInFunction'', ''DateOutService'', ''PlanningTimeCell_Time'', ''PlanningEntityDate'''

    -- Update conditional lookup fields (Many lookups, Take the first value):
    /* TX generated SQL */

    IF @enableExtendedLogging = 1
        INSERT INTO etl.HAWEmployeesPlanningMT_DataCleansingLog (Version, Step, DateTimeStart, DateTimeStop)
        VALUES (@version, @step, @DateTimeStart, GETDATE())

https://support.timextender.com/hc/en-us/community/posts/115011859306-Logging-during-Cleansing-Stored-Procedure
Hi, Some of our customers want to set up 2 ODX Servers (one for dev, one for test and production). On the dev ODX Server we would like to connect to development database sources. So, it would be very handy if we could automatically swap between the 2 ODX Servers using the multi-environment setup of Discovery Hub when pushing a project from dev to test / prod. Best regards, Peter
Hi, At the moment the ODX server is quite stand-alone. There is no way to execute an ODX task as part of a package execution; project execution will carry on regardless of whether the ODX task has executed successfully or not. One possible solution would be to add all ODX tasks as 'External Executables'. That way you could add ODX tasks to the project's execution package.
When dragging a table from the ODX to a DW using the secondary mouse button, we are able to use a field selection dialogue to choose the specific fields we would like to add to the new DW table. However, if I have an existing table, there is not an easy way to add just one or two fields as needed. Instead I have to synchronize the ODX table to the DW table, which will add all the remaining fields. Then, I need to delete the extra fields which are not needed. In some cases, this could be 150+ fields! Ideally, I'd LOVE to see the traditional Data Selection pane being used here. That is an outstanding feature which is sorely missed in ODX projects. Especially the ability to preview the data and select fields right from that preview. I understand that doesn't really fit with how the ODX functions, but the ability to select individual fields to add to an existing table is a must have.
In SSAS Tabular there is an option to import a JSON file for multi-language support, so we have a possibility to offer our customers a Dutch or an English cube. You can read more about this option here: https://www.mssqltips.com/sqlservertip/4547/multilanguage-support-for-ssas-tabular-models/ Is there such a solution in TimeXtender where we can set a second language?
Not really a feature for Discovery Hub, but it would be nice if we could take an official TimeXtender exam and earn a certificate.
I know this post is a little bit different from the conventional posts on this forum. But I am noticing many posts with massive up-votes, or a myriad of posts on the same topic, indicating an urgent request from the community. However, it seems these posts have not been getting any attention from TimeXtender for quite some time now, yet posts with 0 or 1 votes are being answered... Is it because you are planning a massive engine overhaul, or is it something else? Feedback on these topics would be appreciated.
Add an option to clone a Semantic Model. (just like the Qlik Models in the Qlik tab)
We have a data source (Afas "Profit", a Dutch ERP system) that we can only access through a SOAP API. This API allows you to do a call and receive a table in XML format. Calling this SOAP API is quite easy from PowerShell; however, it is not possible to run the PowerShell code from SQL Server (xp_cmdshell is not an option for us). Besides this, having code outside of TX is a path we don't want to go down, as it puts us back in the world of manually deploying files. It would be great if we could deploy and run PowerShell scripts from TX. This would make sense as an option in "Script Actions", e.g. so that the right-click menu has 3 options: * Add Custom Step * Add FTP Source * Add PowerShell Script (Of course, even better would be if TX supported Afas Profit out of the box.)
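For illustration, here is a minimal sketch of the kind of work such a script step would wrap: build a SOAP request and flatten an XML table from the response into rows. The endpoint, SOAPAction, and element names are illustrative placeholders, not the real Afas Profit contract.

```python
# Sketch: construct a SOAP POST request and parse an XML "table" response.
# All names (Row/column tags, action URI) are hypothetical examples.
import urllib.request
import xml.etree.ElementTree as ET

def build_soap_request(url: str, body_xml: str, soap_action: str):
    """Wrap a body fragment in a SOAP 1.1 envelope and return the request."""
    envelope = (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        f"<soap:Body>{body_xml}</soap:Body>"
        "</soap:Envelope>"
    )
    req = urllib.request.Request(url, data=envelope.encode("utf-8"), method="POST")
    req.add_header("Content-Type", "text/xml; charset=utf-8")
    req.add_header("SOAPAction", soap_action)
    return req

def xml_table_to_rows(xml_text: str, row_tag: str = "Row"):
    """Turn <Row><Col>value</Col>...</Row> elements into a list of dicts."""
    root = ET.fromstring(xml_text)
    return [{col.tag: col.text for col in row} for row in root.iter(row_tag)]

sample = "<Data><Row><Id>1</Id><Name>Alice</Name></Row></Data>"
print(xml_table_to_rows(sample))  # → [{'Id': '1', 'Name': 'Alice'}]
```

The parsed rows could then be bulk-inserted into a staging table, which is essentially what a built-in "Add PowerShell Script" action would let you do without leaving TX.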
I would like to see an option added to perform a "Differential Execution". So if I have some changes to push to a new environment, I could easily deploy the differences and then only execute those objects. Currently, the only way to do that is to do a differential deploy, make a note of all the objects deployed and then manually execute each of them in the queue. Note that I edited my original post as it actually seems to be a bug that I'm working on replicating now and will submit to Support. The post now reflects another suggestion that was at the bottom of the original post, so the comments below may not make sense.
TimeXtender appears to force you to always define a decimal field with precision 38: decimal(38,xx). Unfortunately this isn't always going to work for us, as there are times when we need decimal fields with a scale of up to 16 decimal places. Since SQL Server has a maximum precision of 38, we're losing accuracy that we need in order to apply precise calculations in math operations, such as computing a quotient. We would really like to define some of our decimal fields as decimal(22,16). Please see the attached example of an issue we're experiencing when we're not allowed to modify the scale: the repeating 6's are limited to only 6 decimal places instead of the full 16. It would really be nice to have this as an added feature in a future release of TX. Thanks!