New to TimeXtender? Get Started now!
To submit a support ticket related to TimeXtender, click here; for Exmon-related queries, click here
Ask questions, get answers and engage with your peers
Start discussions, ask questions, get answers
Read product guides and how-tos
Submit ideas and suggestions to our team
Read the latest news from our product team
Explore and RSVP for upcoming events
Connect with like-minded professionals
Log in to app.timextender.com, or create an account
Configure User Accounts & Permissions
Create App Server in Azure, or Sandbox
Configure Execution Server
Configure Jobs
Create SSL Endpoint Server (PowerBI | AAS | Qlik | Tableau | CSV)
Add semantic model & endpoint
Add Tables to Semantic Model
Create MDW Storage Server (SQLDB | Snowflake | Synapse | AWS RDS)
Add Data Warehouse Instance
Create Data Warehouse Storage
Map Tables From ODX to MDW
Add data source connection in the Portal
Start with Sample Data
Add your own data sources
Add data source in Desktop
Table and Column Selection
Add Execution Task to ingest data
Create ODX Storage Server
Add ODX Instance
Install and configure the ODX Service
Open inbound firewall so Desktop can reach the ODX Service
Install TimeXtender Desktop (ideally on a separate, client machine)
Create ODX storage in TimeXtender Desktop
Sign-In
Create ODX App Server (App Server in Azure, or Sandbox)
Configure User Accounts & Permissions
Can I talk to a Support Engineer for troubleshooting?
Hi all, I need to create an Identity column. How can I create it in TX? Thanks, Ignacio
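For context, the underlying concept in plain T-SQL is an IDENTITY column; the sketch below uses hypothetical table and column names and shows what that looks like in SQL Server itself, independently of how TimeXtender exposes it in its UI:

-- Minimal T-SQL sketch: an identity column assigns a sequential surrogate key on insert
CREATE TABLE dbo.Customer (
    CustomerKey  INT IDENTITY(1,1) NOT NULL PRIMARY KEY, -- auto-incrementing identity
    CustomerName NVARCHAR(100) NOT NULL
);

-- The key is generated by SQL Server; only the other columns are supplied
INSERT INTO dbo.Customer (CustomerName) VALUES (N'Contoso');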
I’ve noticed some fancy badges below some community profiles. What do I need to do to get these as well and where can I view them all?
Hi, let's assume I am importing from a CSV file. The data is very dirty and somehow I have strings in my int field "ID", so the field is interpreted as NVARCHAR(MAX). If I change the data type of the field (right click > Edit Field) from string back to integer, I receive a runtime error during the execution. What I was expecting to see was corresponding entries in the _M and _L tables. Is this not part of data cleansing after all? We have this problem all over and could not find a better solution than to build custom views with TRY_CAST etc., where we lose lineage and so on. What is the best way to clean up such data (keep valid rows, write errors/warnings on invalid rows)? We are talking 100+ tables, so I'm looking for a highly scalable solution here, not a one-time workaround. Thanks and BR, Tobias
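As a sketch of the view-based workaround mentioned above (hypothetical table and column names; TRY_CAST requires SQL Server 2012 or later), the idea is to split valid and invalid rows so bad records can be reported instead of failing the load:

-- Valid rows: ID converts cleanly to INT
SELECT TRY_CAST(ID AS INT) AS ID, OtherColumn
FROM dbo.Raw_Source
WHERE TRY_CAST(ID AS INT) IS NOT NULL;

-- Invalid rows: surface them with a message instead of breaking the execution
SELECT ID AS RawID, OtherColumn, 'ID is not a valid integer' AS ErrorMessage
FROM dbo.Raw_Source
WHERE ID IS NOT NULL AND TRY_CAST(ID AS INT) IS NULL;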
After trying to import data from a REST source in the ODX, I got an SQL exception with error code 4815. After a bit of Googling I ended up on this page, telling me that it had something to do with SqlBulkCopy not being able to match the columns in source and destination, or the value of a column being larger than the destination allowed (e.g. a string with a length of 200 -> nvarchar(100)). Because TimeXtender defines the columns, it probably wasn't that. Because the RSD I created didn't have a column size specified, all my string type fields allowed 2000 chars. Thinking that none of the string fields were going to be larger, I started changing int to double, and after that all the fields to string and later even to text. None of this worked, which sent me back to the start. After properly examining the parquet file I found 1 record that had a text field that exceeded the 2000 char limit. The next time I get this error I will do the following steps: 1. Examine data type of API
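If a sample of the rejected batch can be landed somewhere queryable (for example by temporarily widening the target columns), a simple length check can pinpoint the offending field and record; a minimal sketch with hypothetical table and column names:

-- Longest value per suspect column, to see which one exceeds the defined size
SELECT MAX(LEN(Description)) AS MaxDescriptionLength,
       MAX(LEN(Comments))    AS MaxCommentsLength
FROM dbo.Staging_RestSource;

-- The specific records that exceed the 2000-character limit
SELECT *
FROM dbo.Staging_RestSource
WHERE LEN(Description) > 2000;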
Hi, I'd love to get your thoughts on an issue we've been dealing with over the past few days. We've noticed that transfer time from ODX to DSA is growing rapidly. This has a significant impact on the total running time of the DWH and the SLA towards our client. Execution log of DSA: as you can see, while the insert and cleansing tasks remain pretty much the same, transfer time is growing. In terms of volume, the data hasn't grown significantly: Batch Id 13428 (first) - 46347863 rows; Batch Id 13481 (last) - 46536187 rows (+0.4%). The difference in transfer time is more apparent as you zoom in on the package execution (Gantt chart): Batch Id 13428 vs the last batch (13481). There are no inconsistencies in the ODX queue or ODX service logs. I've also checked whether there are long-running transactions (sp_who2) on both the TX and DWH servers, but couldn't find anything out of the ordinary. Our setup consists of the ODX repository and DWH storage on the same Azure Managed Instance. The TX application server is (still) on-prem. My burning question is: what c
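As a complement to sp_who2, a DMV query along these lines can surface long-running requests and their wait types during the transfer window; a minimal sketch, assuming VIEW SERVER STATE permission on the Managed Instance:

-- Currently executing requests with elapsed time, wait type and statement text
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.total_elapsed_time / 1000 AS elapsed_seconds,
       t.text AS statement_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.total_elapsed_time DESC;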
Hello, I defined a Project Variable where I cast the output as date. When using the variable in a view, TX returns an int. What am I doing wrong?
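Without seeing the variable definition it is hard to say, but one workaround sketch is to force the type explicitly where the variable is used; the view, table and date literal below are hypothetical, and it is assumed the variable value is substituted as text into the generated SQL:

-- An explicit cast at the point of use forces the column to come out as date
CREATE VIEW dbo.v_Example AS
SELECT CAST('2024-06-30' AS date) AS ReportDate, -- replace the literal with the variable reference
       OtherColumn
FROM dbo.SomeTable;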
It would be great if, within TX (or via an external script), you could trigger Data Source transfer tasks as an initial step within an Execution Package (or batch), so that once the transfer of new data is complete, the execution package runs the next dependent step. This way, if the transfer task from the data source to the DL takes longer (more data than expected), the execution package won't run until it completes. There currently seems to be no method to trigger a transfer load within the execution packages - and therefore no way to make the two different scheduling tools (which is really what they are) align with each other intelligently.
A detailed look at how Success teams can make the best use of technology for true customer engagement.
Learn everything you need to know about our REST API and endpoints
Contact our support team and we'll be happy to help you get up and running!
Find all the guidance you need as you navigate through our success resources