Issues loading .txt files from DBFS into Langchain TextLoader()

David_K93
Contributor

Hello,

I am working on building a Langchain QA application in Databricks. I currently have 13 .txt files loaded into DBFS and am trying to read them in iteratively with TextLoader(), chunk them with Langchain's RecursiveCharacterTextSplitter(), and then add them to a Chroma database. Running this from my local machine works fine, but the application does not seem to accept files loaded from DBFS.
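For reference, the loop looks roughly like this (a sketch; the directory path is made up, and i[0] is the path field of the FileInfo entries returned by dbutils.fs.ls):

for i in dbutils.fs.ls('dbfs:/FileStore/docs'):
    loader = TextLoader(i[0])   # i[0] is a 'dbfs:/...' path
    documents = loader.load()   # errors here when the path comes from DBFS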

[Screenshot of the error: Screenshot 2023-05-19 171751]

I have also tried reading the files in as string objects and passing those to TextLoader(), but that does not work either.

Has anyone found a workaround to this?

1 ACCEPTED SOLUTION

David_K93
Contributor

I ended up tinkering around and found I needed to use the os package and access the files via a '/dbfs/' filepath:

import os
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Directory of docs on the DBFS FUSE mount (path is illustrative)
dir_ls = '/dbfs/FileStore/docs'

# Iterate through the directory of docs: load, split, then add to the total list
txt_ls = []
for i in os.listdir(dir_ls):
    filename = os.path.join(dir_ls, i)
    loader = TextLoader(filename)
    documents = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    texts = text_splitter.split_documents(documents)
    txt_ls.append(texts)
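From there, the chunks can be added to Chroma. A minimal sketch of that last step (the embedding class is just an example; note that txt_ls is a list of lists, one per file, so it gets flattened first):

from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma

# txt_ls holds one list of chunks per file, so flatten before indexing
all_texts = [doc for chunks in txt_ls for doc in chunks]

# Build the Chroma vector store from the flattened chunks
db = Chroma.from_documents(all_texts, OpenAIEmbeddings())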


3 REPLIES

venkatcrc
New Contributor III

Try the following.

Python file APIs need the '/dbfs' prefix in the path. Since you are using the output of dbutils.fs.ls, the paths will have the 'dbfs:' prefix instead.

Replace loader = TextLoader(i[0]) with loader = TextLoader(i[0].replace('dbfs:', '/dbfs')).
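In context, the suggested fix looks like this (a sketch; the directory path is illustrative):

# dbutils.fs.ls returns FileInfo entries whose path starts with 'dbfs:/';
# rewriting it to the '/dbfs/' FUSE mount lets Python open the file
for i in dbutils.fs.ls('dbfs:/FileStore/docs'):
    loader = TextLoader(i[0].replace('dbfs:', '/dbfs'))
    documents = loader.load()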


Anonymous
Not applicable

Hi @David Kersey,

Thank you for posting your question in our community! We are happy to assist you.

To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?

This will also help other community members who may have similar questions in the future. Thank you for your participation and let us know if you need any further assistance! 
