05-24-2023 08:58 AM
Hello,
I am building a LangChain QA application in Databricks. I have 13 .txt files loaded into DBFS and am trying to read them in iteratively with TextLoader(), chunk them with LangChain's RecursiveCharacterTextSplitter(), and then add them to a Chroma database. This works without issue when I run it from my local machine, but the application does not seem to accept files loaded from DBFS.
I have also tried reading the files in as string objects and passing those to TextLoader(), but that does not work either.
Has anyone found a workaround to this?
Labels:
- DBFS
- Local Machine
Accepted Solutions
05-24-2023 10:57 AM
I ended up tinkering around and found I needed to use the os package and access the files via the '/dbfs/' FUSE path:

import os
from langchain.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Iterate through the directory of docs; load and split each file,
# then add its chunks to the total list.
# dir_ls is the document directory given as a '/dbfs/...' path.
txt_ls = []
for i in os.listdir(dir_ls):
    filename = os.path.join(dir_ls, i)
    loader = TextLoader(filename)
    documents = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
    texts = text_splitter.split_documents(documents)
    txt_ls.append(texts)
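One thing to note: because txt_ls collects one list of chunks per file, the result is nested, and most vector stores expect a single flat list of documents. A minimal sketch of the flattening step, with plain strings standing in for the LangChain Document chunks:

```python
# txt_ls mirrors the structure built above: one list of chunks per file.
# Plain strings stand in for LangChain Document objects in this sketch.
txt_ls = [["doc1-chunk1", "doc1-chunk2"], ["doc2-chunk1"]]

# Flatten into a single list before handing the chunks to a vector store.
all_texts = [chunk for per_file in txt_ls for chunk in per_file]
```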
05-24-2023 10:55 AM
Try the below.
Plain-Python components need the '/dbfs' prefix in the path. Since you are using the output of dbutils.fs.ls, the paths will carry the 'dbfs:' prefix instead.
Replace loader = TextLoader(i[0]) with loader = TextLoader(i[0].replace('dbfs:', '/dbfs'))
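To make the two path schemes concrete, here is a small sketch of the conversion (the file path is an assumed example; dbutils.fs.ls returns 'dbfs:/...'-style URIs, while local file APIs on a cluster read through the '/dbfs/' FUSE mount):

```python
# Path as returned by dbutils.fs.ls (example path, assumed for illustration).
spark_path = "dbfs:/docs/file.txt"

# Rewrite the scheme prefix so plain-Python file APIs can open the file
# through the DBFS FUSE mount. The count of 1 guards against replacing
# any later occurrence of the substring.
local_path = spark_path.replace("dbfs:", "/dbfs", 1)
```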
05-28-2023 05:33 PM
Hi @David Kersey
Thank you for posting your question in our community! We are happy to assist you.
To help us provide you with the most accurate information, could you please take a moment to review the responses and select the one that best answers your question?
This will also help other community members who may have similar questions in the future. Thank you for your participation, and let us know if you need any further assistance!